Mozilla Firefox Switches To .tar.xz For Linux Packaging
-
But with xz you can already be downloading the next pkg while the zstd one would still be downloading, so across several packages xz will win.
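FWIW you can sanity-check that with a toy pipeline model (my own sketch in Python; the sizes and speeds are made-up assumptions borrowed from the example further down the thread, and install time is ignored):

```python
# Two-stage pipeline: downloads run back to back on one connection, and
# decompression of package i overlaps the download of package i+1.
def pipelined_total(n_pkgs, dl_time, decomp_time):
    dl_done = 0.0       # when the current download finishes
    decomp_done = 0.0   # when the current decompression finishes
    for _ in range(n_pkgs):
        dl_done += dl_time                        # next download starts right away
        decomp_done = max(decomp_done, dl_done) + decomp_time
    return decomp_done

# Assumed numbers: 10 MB/s link; xz: 50 MB pkg, 10 MB/s decomp;
# zstd: 65 MB pkg, 100 MB/s decomp.
for n in (1, 2, 3, 5):
    xz = pipelined_total(n, 50 / 10, 50 / 10)      # 5 s download, 5 s decompress
    zstd = pipelined_total(n, 65 / 10, 65 / 100)   # 6.5 s download, 0.65 s decompress
    print(f"{n} pkgs: xz {xz:.2f} s, zstd {zstd:.2f} s")
```

Under those assumed numbers xz pulls ahead from about three packages on, because once decompression overlaps the next download, the link rather than the CPU is the bottleneck.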
-
-
Originally posted by Weasel View Post
Decompression is faster if you spend more time compressing
It kinda does exclude it because it's completely pointless if the install is much faster than the download. You'll be completely bottlenecked by the download and the installer will sit around idling waiting for the next download.
download speed: 10 MB/s
xz decompression speed: 10 MB/s
zstd decompression speed: 100 MB/s
install speed: 100 MB/s
uncompressed pkg size: 200 MB
xz pkg size: 50 MB
zstd pkg size: 65 MB
Time for xz: 50 / 10 + 50 / 10 + 200 / 100 = 12 s
Time for zstd: 65 / 10 + 65 / 100 + 200 / 100 ≈ 9.2 s
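Spelled out (a tiny sketch using the numbers above; note the assumption that decompression speed is measured on the compressed input):

```python
def serial_total(pkg_mb, comp_mb, dl_speed, decomp_speed, install_speed):
    # Strictly serial: download, then decompress, then install.
    return comp_mb / dl_speed + comp_mb / decomp_speed + pkg_mb / install_speed

print(serial_total(200, 50, 10, 10, 100))    # xz:   12.0 s
print(serial_total(200, 65, 10, 100, 100))   # zstd:  9.15 s
```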
That is a big difference, and you can't expect everyone to have a high-end CPU; that's not what sells the most. Obviously, the faster the CPU, the less decompression time matters.
-
Originally posted by Anux View Post
Of course, they used standard xz settings I think. The reason behind that, and why Mozilla is also using standard xz, is that they want to get their updates out immediately and not wait hours for compression to finish; also, it would increase resource usage/time on the client side.
Them using default options is either laziness or they have no idea what they are doing. If they wanted the least strain while building (but more strain for everyone downloading), they would have used gz or something cheap and crap. But when you build something once and it gets downloaded a million times, then spend some fucking time compressing it well.
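The trade-off is easy to measure yourself with Python's stdlib lzma if you want numbers; this is just an illustration (the input path is a made-up example), not Mozilla's actual build setup:

```python
import lzma
import time

# Any large, compressible blob works; a local tarball is a realistic stand-in.
with open("firefox.tar", "rb") as f:
    data = f.read()

# xz's default is preset 6; 9e ("extreme") is what you'd pick for a file
# that is compressed once and downloaded a million times.
for preset in (1, 6, 9, 9 | lzma.PRESET_EXTREME):
    t0 = time.time()
    out = lzma.compress(data, preset=preset)
    label = "9e" if preset & lzma.PRESET_EXTREME else str(preset)
    print(f"-{label}: {len(out) / 1e6:.1f} MB in {time.time() - t0:.1f} s")
```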
Originally posted by Anux View Post
One doesn't exclude the other, it should be possible to order downloads by dependencies and start decompressing/installing while the rest is still downloading.
This is why xz > zstd for this kind of job.
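Roughly like this (a hypothetical sketch, not how any real package manager is written; package names and timings are invented):

```python
import asyncio

async def downloader(order, queue):
    for pkg, seconds in order:           # already sorted by dependencies
        await asyncio.sleep(seconds)     # stand-in for the actual download
        await queue.put(pkg)             # hand off as soon as it's on disk
    await queue.put(None)                # signal: nothing more to come

async def installer(queue):
    while (pkg := await queue.get()) is not None:
        await asyncio.sleep(0.5)        # stand-in for decompress + install
        print("installed", pkg)

async def main():
    order = [("libfoo", 2.0), ("libbar", 1.5), ("app", 3.0)]
    queue = asyncio.Queue()
    # Installer drains the queue while the downloader keeps fetching.
    await asyncio.gather(downloader(order, queue), installer(queue))

asyncio.run(main())
```

And the smaller the archives, the sooner each download hands off to the installer, which is the point being made about xz here.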
-
Originally posted by Weasel View Post
I don't think I got such close compression ratios. They probably aren't using max compression for xz, so it's their problem.
It also sounds to me like they should have made it parallel, or at least install the dependency packages while downloading the next ones, rather than change every single package's compression method. Work smarter.
-
Originally posted by Anux View Post
Exactly, that's why I quoted the part that says 0.8% size increase. You seem to believe that downloading and decompression happen at the same time, but all package managers I know do it serially. So decompression is added to download time.
It obviously depends on your hardware: if you have a RasPi with gigabit internet, then zstd or lz4 is better; if you have a Threadripper on a 56k modem, then xz would be better.
But since there is practically no difference in size with the change to zstd, people with slower hardware profit while others hardly notice the difference.
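For the serial model from the example above you can even compute the break-even link speed (same assumed sizes and decompression speeds; the install step cancels out because it's identical for both):

```python
# xz and zstd tie when:
#   xz_mb / bw + xz_mb / xz_dec == zstd_mb / bw + zstd_mb / zstd_dec
# Solving for the download bandwidth bw:
def break_even_bw(xz_mb, zstd_mb, xz_dec, zstd_dec):
    return (zstd_mb - xz_mb) / (xz_mb / xz_dec - zstd_mb / zstd_dec)

print(break_even_bw(50, 65, 10, 100))  # ~3.4 MB/s
```

With those numbers, xz only wins serially below roughly 3.4 MB/s of download bandwidth; anything faster favors zstd, which matches the RasPi-vs-modem intuition.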
-
Originally posted by fitzie View Post
You are ignoring my point. I didn't say it's hard to figure out, it's just an afterthought for the zstd developers.
Originally posted by fitzie View Post
I'm sure if and when toybox/busybox adds zstd it will not be trash like it is upstream.
For some reason they are still stuck in 2020. I'll let them know.
It's not a bad thing per se, but I prefer to call a spade a spade.
Last edited by intelfx; 03 December 2024, 05:31 PM.
-
Originally posted by intelfx View Post
That's an even more flimsy argument. Out of all the gzip/bzip2/xz man pages, this behavior is only mentioned in passing in one of these, so what's actually monumentally stupid is relying on it.
-
Originally posted by fitzie View Post
this is the most obvious and stupid default behavior, and you didn't even notice the lack of --exclude-compressed being the default there
Put it other way: if you wrote a script relying on that behavior (or absence thereof), I'd fire you. (And yes, I'm in position where I do make such decisions.)
-
Originally posted by intelfx View Post
So the only "unnecessary difference" you could cite is that zstd uses --keep instead of --rm by default? That's quite a flimsy argument you got there. Besides, the difference is necessary: deleting source files is bad UX, no other CLI tool does that.
And it has all the same flags; only the default was flipped, so nothing is "unfamiliar".
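The default difference takes ten seconds to demonstrate (a sketch assuming gzip and zstd are on PATH; the file name is made up):

```python
import os
import subprocess

with open("demo.txt", "w") as f:
    f.write("hello " * 1000)

subprocess.run(["zstd", "-q", "demo.txt"], check=True)
print(os.path.exists("demo.txt"))   # True: zstd keeps the input by default

subprocess.run(["gzip", "demo.txt"], check=True)
print(os.path.exists("demo.txt"))   # False: gzip removed it (as xz/bzip2 do)
```

And zstd --rm gives you the old behavior if you actually want it.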
Yeah, like Arch and Fedora and OCI containers and podman? (The change of default in F41 was rejected, but it's not really "rejected" so much as "delayed": they are going to implement double-compression and retire gzip later, which is a clear migration path in progress.) Talk about being out of touch.
I really think you're writing from the year 2020. Please run that time machine in the reverse direction and get back to 2024.
Hope that helps.