Fwupd Switches From XZ To Zstd Compression: More Trust & Slightly Better Performance

  • #11
    Originally posted by varikonniemi View Post
    xz should be abandoned. Upstream maliciousness alone is kind of unforgivable [...]
    I don't think that's a good reason to abandon FOSS at all. If the vulnerability can be removed and the issue fixed, why would you abandon an open-source software project?



    • #12
      Originally posted by arekm View Post
      It is in my usage and that's why I asked.
      Asking won't help you much; you need to do your own benchmarks. It depends heavily on your files. For text, PPMd might be better; for other stuff, LZMA2-based algos with extreme settings (and sufficient RAM) might be the best bet. But every now and then everything gets beaten by zstd long mode.

      Or you save all that time and use zstd level 15, which is usually within 1 to 3% of the best compressors and substantially faster.
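
      To make the "do your own benchmarks" point concrete, here's a rough Python sketch - just a sketch, assuming the third-party zstandard package is installed, and the input path is only a placeholder - that pits LZMA at an extreme preset against zstd level 15 and zstd level 19 with long-distance matching:

import lzma
import time

import zstandard  # third-party: pip install zstandard


def bench(path):
    """Compress one file with a few contenders and print size and time for each."""
    with open(path, "rb") as f:
        data = f.read()

    candidates = {
        # LZMA2 at the maximum preset with the "extreme" flag (roughly `xz -9e`)
        "xz -9e": lambda d: lzma.compress(d, preset=9 | lzma.PRESET_EXTREME),
        # zstd level 15, the "usually close enough and much faster" setting
        "zstd -15": zstandard.ZstdCompressor(level=15).compress,
        # zstd level 19 with long-distance matching (roughly `zstd -19 --long=27`)
        "zstd -19 --long=27": zstandard.ZstdCompressor(
            compression_params=zstandard.ZstdCompressionParameters.from_level(
                19, window_log=27, enable_ldm=True
            )
        ).compress,
    }

    for name, compress in candidates.items():
        start = time.perf_counter()
        out = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name:>20}: {len(out):>12,} bytes "
              f"({len(out) / len(data):6.1%} of input) in {elapsed:.2f}s")


bench("my-test-file.tar")  # placeholder path - point it at your own data

      The ranking depends entirely on your data, so run it on the files you actually ship.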



      • #13
        Originally posted by arekm View Post

        It is in my usage and that's why I asked.
        If your only consideration is size then you can compress each file with every compression algorithm and then pick the smallest one.
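
        Just to spell that out, a quick Python sketch - stdlib codecs only; a zstd entry could be bolted on via the third-party zstandard package, and the filename is a placeholder - would look something like this:

import bz2
import lzma
import zlib
from pathlib import Path

# Candidate compressors, each mapping raw bytes to compressed bytes.
CODECS = {
    ".xz": lambda d: lzma.compress(d, preset=9),
    ".bz2": lambda d: bz2.compress(d, 9),
    ".gz": lambda d: zlib.compress(d, 9),
}


def compress_smallest(path):
    """Compress the file with every codec and return (extension, smallest result)."""
    data = Path(path).read_bytes()
    return min(
        ((ext, codec(data)) for ext, codec in CODECS.items()),
        key=lambda pair: len(pair[1]),
    )


ext, blob = compress_smallest("example.dat")  # placeholder filename
Path("example.dat" + ext).write_bytes(blob)
print(f"kept {ext}, {len(blob):,} bytes")

        The obvious downside is that you pay the compression time of every codec for every file.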



        • #14
          Originally posted by espi View Post

          If your only consideration is size then you can compress each file with every compression algorithm and then pick the smallest one.
          I did that some time ago when choosing a compressor. xz won. But I was just reading the article, which says "it ended up yielding compressed metadata around 3% smaller than XZ", and asked if there had been any recent improvement in zstd. I also got my answer - no, no generic improvement; just for *that* metadata, zstd turned out to be better.

          End of story, thanks for all the hints.



          • #15
            Originally posted by arekm View Post

            It is in my usage and that's why I asked.
            If you have a use-case, you should probably try it on your use-case to know for sure.



            • #16
              ZSTD is more trustworthy? What in the world?!? You're telling me you're trusting mfing Facebook more?



              • #17
                Originally posted by Weasel View Post
                ZSTD is more trustworthy? What in the world?!? You're telling me you're trusting mfing Facebook more?
                No, you'd rather trust something that's being developed by a single anonymous "you don't know who" who has a chance to plant a backdoor in the library?

                Or do you have this conspiratorial mindset and you think that just because Facebook is bad for privacy, it can plant some data miner into ZSTD?

                Btw, Facebook is also a large contributor to the btrfs file system. By this logic, maybe we should all avoid btrfs like the plague?
                Last edited by user1; 03 April 2024, 10:17 AM.



                • #18
                  Originally posted by varikonniemi View Post
                  xz should be abandoned. Upstream maliciousness alone is kind of unforgivable, but from what I have read the format itself is poorly and overly engineered.
                  Most of the claims about the format being bad are poor claims. But in general xz-utils is outdated compared to the original work by Igor Pavlov (creator of LZMA, LZMA2 and 7-Zip). It is kinda funny how 7-Zip compresses xz data faster and better than xz-utils itself, without relying on xz-utils.



                  • #19
                    Originally posted by Weasel View Post
                    ZSTD is more trustworthy? What in the world?!? You're telling me you're trusting mfing Facebook more?
                    Would I trust ZSTD over a random maintainer with an unknown identity, probably from China, who has only been in the project a little over 2 years? Yes, for sure.

                    What if that person had a proven identity? Like, you know, attended conferences etc. - that I could debate.

                    Seriously, the person with the biggest claim to trust is the creator of LZMA, Igor Pavlov, but the Linux world decided to fork away from his work a long time ago. You know, the guy has thanklessly maintained his project for 25 years and made certain companies like WinRAR a joke.



                    • #20
                      Originally posted by roughl View Post

                      Size is not everything with regard to compression. For example, Arch Linux decided to switch to zstd for its packages, which increased the package size by 0.8%, but decompression time saw a ~1300% speedup: https://archlinux.org/news/now-using...e-compression/.
                      It depends on the size of the project too. For my BSD project, package size matters a lot because I pay for most of the mirrors. lzma compression has been the best so far in testing. I'd also need to transition to a different format upon a major release if I wanted to change it. I previously went from bzip2 to lzma (both with libarchive).

                      I am considering changing due to the situation, but compression/decompression speed is only one factor, and cost is a big one for small projects. It's not just bandwidth costs but also the fact that I use lzma + compiler benchmarks to pick hardware for build nodes. I'd have to go back and look at a new format's performance and try to figure out the impact on package build times. It takes 1-3 days now, depending on a number of conditions, including how many packages pass, whether I'm using a cache of distfiles, etc.

                      The knee-jerk "xz is bad, let's go to a new shiny format instead" reaction takes a lot of time and planning for some things. It's not the same as an end user just deciding that going forward they're going to use zstd.

