Ubuntu 21.10 Compressing Debian Packages With Zstd


  • oleid
    replied
    Haha, I decompress the zstd archive just to compress it again with btrfs. If only I could skip the decompression step.

  • pranav
    replied
    Originally posted by yoshi314 View Post
    There are quite a few people who really hate zstd because of the company behind its implementation, the same as people who shun SELinux for the very same reason.
    An ELI5 on this comment would be nice. SELinux is not good because...

  • david-nk
    replied
    Originally posted by uid313 View Post
    The drawback of this is that while Zstd decompresses faster, it also compresses less, so the files do get bigger.
    Yes, but the increase is so marginal it doesn't matter. If we assume the 1% average increase in package size is accurate and we take a Firefox-sized package, we save 0.05 s of download time on a 100 Mbps connection when using XZ instead of zstd, but we pay more than 3 seconds of additional decompression time. On a slow CPU or some shitty VPS, it will be well above 10 seconds of additional decompression time. Not to mention that the vast majority of package installations happen inside data centers with 1/10/100 Gbps lines, where Debian packages are served from the local network instead of the official Debian/Ubuntu servers.
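
    A quick back-of-the-envelope check of that 0.05 s figure, assuming a roughly 60 MB Firefox package (the exact size varies by release):

    Code:
    extra download = 1% of 60 MB        = 0.6 MB
    link speed     = 100 Mbps           = 12.5 MB/s
    extra time     = 0.6 MB / 12.5 MB/s ≈ 0.05 s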

    I wonder what compression algorithm Apple and Microsoft use for their OS updates.
    Deflate/zlib, and Apple theoretically also supports bzip2. I wouldn't take lessons from either of them if I were you.

  • iruoy
    replied
    Originally posted by jacob View Post
    Are there any benchmarks for this? Intuitively I would have thought that decompression times would be insignificant compared to download times (and the actual disk writes), and that a slower but higher-ratio compressor would thus lead to better performance overall?

    Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.
    That is from Arch Linux; they were using XZ compression before. I'm not sure what Debian/Ubuntu are using at the moment.

  • uid313
    replied
    The drawback of this is that while Zstd decompresses faster, it also compresses less, so the files do get bigger.

    I wonder what compression algorithm Apple and Microsoft use for their OS updates.

  • discordian
    replied
    Originally posted by jacob View Post
    Are there any benchmarks for this? Intuitively I would have thought that decompression times would be insignificant compared to download times (and the actual disk writes), and that a slower but higher-ratio compressor would thus lead to better performance overall?
    If you downloaded and decompressed in parallel (potentially on more cores), then yes. The way dpkg/apt works, however, is completely serial, so decompression comes afterwards, with no parallelism in sight. Further, it calls sync quite often in an attempt to improve reliability (which stalls any pending writes).

    So no, decompression takes up a lot of time. You could mitigate that at the expense of using more memory; see the sketch below.
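
    As a rough illustration (not how dpkg actually works), here is a minimal sketch of streaming decompression using zstd's public streaming API from zstd.h, reading stdin and writing stdout. If apt fed the network stream straight into something like this, decompression would overlap with the download instead of following it:

    Code:
    /* Build with: cc unzstd.c -lzstd
     * Usage: ./unzstd < pkg.tar.zst > pkg.tar */
    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    int main(void)
    {
        size_t const in_cap  = ZSTD_DStreamInSize();   /* recommended input chunk size */
        size_t const out_cap = ZSTD_DStreamOutSize();  /* recommended output chunk size */
        void *in  = malloc(in_cap);
        void *out = malloc(out_cap);
        ZSTD_DStream *ds = ZSTD_createDStream();
        if (!in || !out || !ds) return 1;
        ZSTD_initDStream(ds);

        size_t n;
        while ((n = fread(in, 1, in_cap, stdin)) > 0) {
            ZSTD_inBuffer ib = { in, n, 0 };
            while (ib.pos < ib.size) {                 /* decompress each chunk as it arrives */
                ZSTD_outBuffer ob = { out, out_cap, 0 };
                size_t const ret = ZSTD_decompressStream(ds, &ob, &ib);
                if (ZSTD_isError(ret)) {
                    fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(ret));
                    return 1;
                }
                fwrite(out, 1, ob.pos, stdout);
            }
        }
        ZSTD_freeDStream(ds);
        free(in);
        free(out);
        return 0;
    }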

  • cl333r
    replied
    Originally posted by yoshi314 View Post
    There are quite a few people who really hate zstd because of the company behind its implementation, the same as people who shun SELinux for the very same reason.
    As long as it's truly open source, I don't mind much.
    What I really like about zstd is that it finally transitions the Linux stack to a modern, easy API rather than the old convoluted mess that is zlib.

    Zstandard API is designed with learning curve in mind. At the top, you'll find simple methods, using trivial arguments and behavior. Then, at each new paragraph, the API introduces new concepts and parameters, giving gradually more control for advanced usages. [1]

    [1] https://facebook.github.io/zstd/
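
    For example, a one-shot round trip with the simple functions at the top of zstd.h takes just two calls (a minimal sketch, with error handling trimmed to the basics):

    Code:
    /* Build with: cc roundtrip.c -lzstd */
    #include <stdio.h>
    #include <string.h>
    #include <zstd.h>

    int main(void)
    {
        const char *src = "hello, zstd";
        size_t const src_len = strlen(src) + 1;

        /* compress at level 3 (zstd's default) */
        char packed[128];
        size_t const c_len = ZSTD_compress(packed, sizeof packed, src, src_len, 3);
        if (ZSTD_isError(c_len)) return 1;

        /* decompress and verify the round trip */
        char back[128];
        size_t const d_len = ZSTD_decompress(back, sizeof back, packed, c_len);
        if (ZSTD_isError(d_len) || d_len != src_len) return 1;

        printf("%zu -> %zu bytes, round-trip: %s\n", src_len, c_len, back);
        return 0;
    }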

  • arun54321
    replied
    Originally posted by jacob View Post
    Are there any benchmarks for this? Intuitively I would have thought that decompression times would be insignificant compared to download times (and the actual disk writes), and that a slower but higher-ratio compressor would thus lead to better performance overall?
    You can search for benchmarks on the Arch forums.

  • yoshi314
    replied
    There are quite a few people who really hate zstd because of the company behind its implementation, the same as people who shun SELinux for the very same reason.

  • lyamc
    replied
    Originally posted by caligula View Post
    "Facebook-developed Zstandard compression technology"

    I'm pretty sure zstd was invented long before the developer was hired by FB.
    Hence the word "developed" and not "invented".
