Arch's Switch To Zstd: ~0.8% Increase In Package Size For ~1300% Speedup In Decompression Time

  • #41
    Originally posted by atomsymbol

    Unfortunately, pixz isn't fully command-line compatible with xz. I would need to create a wrapper (with path such as $HOME/bin/shared/xz) which decides whether to call /usr/bin/xz or /usr/bin/pixz depending on whether the xz-compatible command-line options passed to the wrapper can be translated to pixz's syntax.
    The point is that package managers could use whatever parallel implementation suits them; as pixz demonstrates, it is possible.
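    To illustrate, here is a minimal sketch of such a dispatching wrapper in Python. The PIXZ_COMPATIBLE whitelist is an assumption for illustration (only -d, -k and the -0 to -9 levels are forwarded), not an exhaustive mapping of pixz's actual option syntax; anything outside it falls back to the real xz.

```python
#!/usr/bin/env python3
# Hypothetical dispatching wrapper, e.g. installed as $HOME/bin/shared/xz.
# If every option passed is in a small whitelist assumed to mean the same
# thing to pixz as to xz, call pixz for parallelism; otherwise fall back
# to the real xz binary.
import os
import sys

# Assumed to be safe to forward unchanged to pixz (not an exhaustive list).
PIXZ_COMPATIBLE = {"-d", "-k"} | {f"-{n}" for n in range(10)}


def main() -> None:
    opts = [arg for arg in sys.argv[1:] if arg.startswith("-")]
    tool = "/usr/bin/pixz" if all(o in PIXZ_COMPATIBLE for o in opts) else "/usr/bin/xz"
    # Replace this process with the chosen tool, forwarding all arguments.
    os.execv(tool, [tool] + sys.argv[1:])


if __name__ == "__main__":
    main()
```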



  • #42
    For anyone wanting more information about this change and the reasoning behind choosing zstd, check out the thread on the arch-dev-public mailing list...



  • #43
    Originally posted by betam4x

    Honestly, I'm on the fence about this. For regular package downloads, I have no complaints about decompression time. I understand there are systems with fewer than 32 threads and systems without SSDs, but it really doesn't seem to be that much of a bottleneck. I'd rather see pacman utilize multiple threads for downloading, etc., without having to pull anything in from the AUR. Package databases could all be updated at the same time, and the number of download threads could easily be specified by a setting in pacman.conf if pacman implemented it.

    That being said, I have no complaints about Arch. It runs and works great. It provides the smoothest Linux experience I've ever had while remaining bloat-free.

    On multi-threaded downloading, I agree it would be nice if that were the default in pacman (that's what powerpill provides). But you can have both: parallel downloads and faster decompression.

    I have no idea how long the decompression itself takes during an update; pacman's logs don't really break it down. Maybe someone smarter than me knows how to measure how long each step takes (perhaps via libalpm?). A rough way to time just the decompression step is sketched below.
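    A minimal sketch of that measurement, assuming the xz and zstd binaries are installed; the file names are placeholders, so point them at real packages from /var/cache/pacman/pkg:

```python
#!/usr/bin/env python3
# Rough comparison of pure decompression time for an .xz vs a .zst package.
# The file names are placeholders; substitute real packages from
# /var/cache/pacman/pkg. This measures only decompression, not download,
# signature checking or installation.
import subprocess
import time

SAMPLES = {
    "xz":   ["xz", "-d", "-c", "some-package.pkg.tar.xz"],
    "zstd": ["zstd", "-d", "-c", "some-package.pkg.tar.zst"],
}

for name, cmd in SAMPLES.items():
    start = time.perf_counter()
    # Discard the decompressed output; only the elapsed time matters here.
    subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    print(f"{name}: {time.perf_counter() - start:.3f} s")
```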



  • #44
    Originally posted by caligula
    Besides, 30% faster than lz4 sounds almost as fast as memcpy: https://github.com/lz4/lz4
    Your comment reminded me of a project called blosc. It can actually decompress in memory faster than memcpy in some scenarios; a rough illustration follows.
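    As a rough illustration (not a rigorous benchmark), the python-blosc bindings can be used to compare decompression against a plain in-memory copy. Whether blosc actually beats the copy depends heavily on the data's compressibility, the codec and the thread count; the repetitive buffer below is a best-case input.

```python
#!/usr/bin/env python3
# Very rough comparison of blosc decompression vs. a plain in-memory copy.
# Requires the python-blosc package. The buffer is highly repetitive, which
# is the kind of input where blosc can outrun a memcpy-style copy.
import time

import blosc

data = b"0123456789abcdef" * (16 * 1024 * 1024 // 16)  # ~16 MiB, very compressible

compressed = blosc.compress(data, typesize=8, cname="lz4", clevel=5)

start = time.perf_counter()
restored = blosc.decompress(compressed)
decompress_time = time.perf_counter() - start

start = time.perf_counter()
copied = bytearray(data)  # forces a real copy of the buffer (memcpy-like)
copy_time = time.perf_counter() - start

assert restored == data and bytes(copied) == data
print(f"blosc decompress: {decompress_time * 1000:.2f} ms")
print(f"plain copy:       {copy_time * 1000:.2f} ms")
```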
