Fwupd Switches From XZ To Zstd Compression: More Trust & Slightly Better Performance
Originally posted by arekm
It is in my usage and that's why I asked.
Or you save all that time and use zstd level 15, which is usually within 1 to 3% of the best compressors and substantially faster.
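For illustration, a minimal sketch of that comparison in Python, assuming the third-party zstandard package is installed and using a placeholder input file; the actual numbers will depend on the data:

```python
import lzma
import time

import zstandard  # third-party binding: pip install zstandard

# Placeholder input; substitute any file you actually ship.
data = open("firmware.bin", "rb").read()

def timed(label, compress):
    start = time.perf_counter()
    out = compress()
    print(f"{label}: {len(out)} bytes in {time.perf_counter() - start:.2f}s")

# xz at a high preset versus zstd at level 15, as suggested above.
timed("xz -9", lambda: lzma.compress(data, preset=9))
timed("zstd -15", lambda: zstandard.ZstdCompressor(level=15).compress(data))
```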
Originally posted by espi
If your only consideration is size, then you can compress each file with every compression algorithm and then pick the smallest one.
End of story, thanks for all hints.
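A minimal sketch of that "compress with everything, keep the smallest" idea, using only Python's standard-library codecs (xz, bzip2, zlib) as stand-ins for whatever set of compressors one actually cares about:

```python
import bz2
import lzma
import zlib

# Candidate compressors: name -> one-shot compress function.
CANDIDATES = {
    "xz": lambda d: lzma.compress(d, preset=9),
    "bzip2": lambda d: bz2.compress(d, 9),
    "zlib": lambda d: zlib.compress(d, 9),
}

def smallest(data: bytes) -> tuple[str, bytes]:
    """Compress data with every candidate and return the smallest result."""
    results = {name: fn(data) for name, fn in CANDIDATES.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

if __name__ == "__main__":
    payload = b"example payload " * 4096  # placeholder data
    name, blob = smallest(payload)
    print(f"{name} wins: {len(payload)} -> {len(blob)} bytes")
```

In practice the winning codec's name would also have to be stored alongside the blob so the consumer knows how to decompress it.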
Originally posted by Weasel
ZSTD is more trustworthy? What in the world?!? You're telling me you're trusting mfing Facebook more?
Or do you have a conspiratorial mindset and think that just because Facebook is bad for privacy, it can plant some data miner into ZSTD?
Btw, Facebook is also a large contributor to the btrfs file system. By this logic, maybe we should all avoid btrfs like the plague?
Last edited by user1; 03 April 2024, 10:17 AM.
Originally posted by varikonniemi
xz should be abandoned. Upstream maliciousness alone is kind of unforgivable, but from what I have read the format itself is poorly and overly engineered.
Originally posted by Weasel
ZSTD is more trustworthy? What in the world?!? You're telling me you're trusting mfing Facebook more?
What if that person had a proven identity? Like, you know, attended conferences etc. That I could debate.
Seriously, the one who has earned the most trust is the creator of LZMA, Igor Pavlov, but the Linux world decided to fork away from him a long time ago. You know, the guy thanklessly maintained the project for 25 years and made certain companies like WinRAR look like a joke.
Originally posted by roughl
Size is not everything with regard to compression. For example, Arch Linux decided to switch to zstd for its packages, which increased package size by 0.8%, but decompression time saw a ~1300% speedup: https://archlinux.org/news/now-using...e-compression/.
I am considering changing due to the situation, but compression/decompression speed is only one factor, and cost for small projects is a big one. It's not just bandwidth costs but also the fact that I use lzma + compiler benchmarks to pick hardware for build nodes. I'd have to go back, look at a new format's performance, and try to figure out the impact on package build times. It takes 1-3 days now, depending on a number of conditions, including how many packages pass, whether I'm using a cache of distfiles, etc.
The knee-jerk "xz is bad, let's go to a shiny new format instead" reaction takes a lot of time and planning for some things. It's not the same as an end user simply deciding that, going forward, they're going to use zstd.
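To put rough local numbers on that size-versus-decompression-time trade-off before committing to an infrastructure change, something like the following sketch could be used, again assuming the third-party zstandard package and a placeholder package payload:

```python
import lzma
import time

import zstandard  # third-party binding: pip install zstandard

payload = open("package.tar", "rb").read()  # placeholder package payload

xz_blob = lzma.compress(payload, preset=9)
zstd_blob = zstandard.ZstdCompressor(level=19).compress(payload)
print(f"xz: {len(xz_blob)} bytes, zstd -19: {len(zstd_blob)} bytes")

def bench(label, decompress, rounds=10):
    start = time.perf_counter()
    for _ in range(rounds):
        decompress()
    print(f"{label}: {(time.perf_counter() - start) / rounds * 1000:.1f} ms per decompression")

bench("xz", lambda: lzma.decompress(xz_blob))
bench("zstd", lambda: zstandard.ZstdDecompressor().decompress(zstd_blob))
```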