Arch's Switch To Zstd: ~0.8% Increase In Package Size For ~1300% Speedup In Decompression Time
For anyone wanting more information about this change and the reasoning behind choosing zstd, you should check out the thread on the arch-dev-public mailing list...
Originally posted by betam4x
Honestly, I'm on the fence about this. For regular package downloads, I have no complaints about decompression time. I understand there are systems with fewer than 32 threads and systems without SSDs, but decompression really doesn't seem to be that much of a bottleneck. I'd rather see pacman itself utilize multiple threads for downloading, etc. without having to pull in an AUR package. Package databases could all be updated at the same time, and the number of download threads could easily be specified by a setting in pacman.conf if pacman implemented it.
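A pacman.conf knob like the one described above might look like the following sketch. (A later pacman release, 6.0, did add a ParallelDownloads option; at the time of this thread it was only hypothetical, so treat the exact syntax as illustrative.)

```ini
# /etc/pacman.conf -- sketch of a parallel-download setting
[options]
# Number of packages to download concurrently
ParallelDownloads = 5
```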
That being said, I have no complaints about Arch. It runs and works great. It provides the smoothest Linux experience I've ever had while remaining bloat free.
I have no idea how long the decompression itself takes when doing an update; I don't see a breakdown in the pacman logs. Maybe someone smarter than me knows how we could measure how long each step takes (perhaps via libalpm?).
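Absent a per-step breakdown in pacman's logs, one crude way to get a feel for the cost is to time decompression directly, outside pacman. Python's standard library has xz support via `lzma` (zstd needs the third-party `zstandard` package), so this sketch times only the xz side, with a synthetic payload standing in for a package:

```python
import lzma
import time

# Synthetic stand-in payload; real packages are tarballs, this is
# just compressible bytes of a similar order of magnitude.
data = b"pacman package payload " * 200_000  # ~4.6 MB

blob = lzma.compress(data)  # xz container, like the old .pkg.tar.xz packages

start = time.perf_counter()
restored = lzma.decompress(blob)
elapsed = time.perf_counter() - start

assert restored == data
print(f"decompressed {len(blob)} compressed bytes in {elapsed * 1000:.1f} ms")
```

Running the same pattern against a real `.pkg.tar.xz` file (read the bytes, time `lzma.decompress`) would give per-package numbers for your own machine.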
Originally posted by caligula
Besides, 30% faster than lz4 sounds almost as fast as memcpy: https://github.com/lz4/lz4
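The "almost as fast as memcpy" comparison can be sanity-checked by measuring a raw in-memory copy against decompression of the same logical data on one machine. Python's stdlib has no lz4 bindings, so this sketch uses `zlib` as a stand-in codec (zlib is much slower than lz4, so the gap shown here will be wider than the one the quote describes):

```python
import time
import zlib

payload = b"arch linux package data " * 400_000  # ~9.6 MB, highly compressible
blob = zlib.compress(payload, level=6)

# Raw memory copy (memcpy-like baseline); bytearray() forces a real copy.
start = time.perf_counter()
copied = bytearray(payload)
copy_s = time.perf_counter() - start

# Decompressing back to the same logical data.
start = time.perf_counter()
restored = zlib.decompress(blob)
decomp_s = time.perf_counter() - start

assert copied == payload and restored == payload
print(f"copy:       {copy_s * 1000:.2f} ms")
print(f"decompress: {decomp_s * 1000:.2f} ms")
```

Swapping in real lz4 or zstd bindings for `zlib` would reproduce the comparison the lz4 project's own benchmarks make.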