Mozilla Firefox Switches To .tar.xz For Linux Packaging

  • energyman
    replied
But with xz you can already start downloading the next package while the zstd one is still downloading, so across several packages xz will win.



  • Anux
    replied
    Originally posted by Weasel View Post
    Decompression is faster if you spend more time compressing
Hm, strange: it actually gets faster up to level 5, then it stagnates.

Originally posted by Weasel View Post
It kinda does exclude it because it's completely pointless if the install is much faster than the download. You'll be completely bottlenecked by the download and the installer will sit around idling waiting for the next download.
Just a theoretical calculation; real numbers depend on your system, of course:
download speed: 10 MB/s
xz decompression speed: 10 MB/s (of compressed input)
zstd decompression speed: 100 MB/s (of compressed input)
install speed: 100 MB/s
package size (uncompressed): 200 MB
package size (xz): 50 MB
package size (zstd): 65 MB

Time for xz: 50/10 + 50/10 + 200/100 = 12 s
Time for zstd: 65/10 + 65/100 + 200/100 ≈ 9.2 s

That is a big difference, and you can't expect everyone to have a high-end CPU; that is not what sells the most. Obviously, the faster the CPU, the less influence decompression time has.
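
To make that concrete, here is a minimal sketch of the same arithmetic, assuming the same strictly serial download → decompress → install pipeline and the illustrative numbers above, with a sweep over link speeds to show where xz's smaller download starts to win:

Code:
# Serial pipeline: download -> decompress -> install, one step at a time.
# Decompression speed is measured over compressed bytes, as in the post.

def total_time(compressed_mb, download_mbps, decomp_mbps,
               installed_mb=200, install_mbps=100):
    """Seconds to fetch, decompress, and install one package serially."""
    return (compressed_mb / download_mbps   # download
            + compressed_mb / decomp_mbps   # decompress
            + installed_mb / install_mbps)  # install

print(total_time(50, 10, 10))    # xz:   12.0 s
print(total_time(65, 10, 100))   # zstd:  9.15 s

# Sweep the link speed: below roughly 3.5 MB/s, xz's smaller download wins.
for mbps in (1, 2, 5, 10, 50):
    xz, zstd = total_time(50, mbps, 10), total_time(65, mbps, 100)
    print(f"{mbps:>2} MB/s link: xz {xz:6.2f} s, zstd {zstd:6.2f} s")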



  • Weasel
    replied
    Originally posted by Anux View Post
Of course; they used standard xz settings, I think. The reason behind that, and why Mozilla also uses standard xz, is that they want to get their updates out immediately and not wait hours for compression to finish; it would also increase resource usage/time on the client side.
Decompression is faster if you spend more time compressing, since there's less data to pull through. Increasing the dictionary size does increase memory usage when decompressing, though.

Using default options is either laziness or they have no idea what they are doing. If they wanted the least strain while building (but more strain when people download it) they would have used gz or something cheap and crap. But when you build something once and it gets downloaded a million times, then spend some fucking time compressing it well.
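
A rough way to check this yourself: Python's standard-library lzma module exposes the same presets as the xz tool (0-9, optionally OR-ed with the -e "extreme" modifier), so the size-versus-time trade-off is easy to measure. The input path below is an arbitrary stand-in; any large file works:

Code:
# Measure how the xz preset level trades one-time compression cost
# for output size (and, typically, faster decompression).
import lzma
import time

data = open("/usr/bin/python3", "rb").read()  # arbitrary sample input

for label, preset in [("0", 0), ("6 (default)", 6), ("9", 9),
                      ("9e", 9 | lzma.PRESET_EXTREME)]:
    t0 = time.perf_counter()
    blob = lzma.compress(data, preset=preset)
    t_comp = time.perf_counter() - t0

    t0 = time.perf_counter()
    lzma.decompress(blob)
    t_dec = time.perf_counter() - t0

    print(f"preset {label:>11}: {len(blob):>9} B, "
          f"compress {t_comp:.2f} s, decompress {t_dec * 1000:.0f} ms")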

    Originally posted by Anux View Post
One doesn't exclude the other; it should be possible to order downloads by dependencies and start decompressing/installing while the rest is still downloading.
    It kinda does exclude it because it's completely pointless if the install is much faster than the download. You'll be completely bottlenecked by the download and the installer will sit around idling waiting for the next download.

    This is why xz > zstd for this kind of job.



  • Anux
    replied
    Originally posted by Weasel View Post
    I don't think I got such close compression ratios. They probably aren't using max compression for xz, so it's their problem.
Of course; they used standard xz settings, I think. The reason behind that, and why Mozilla also uses standard xz, is that they want to get their updates out immediately and not wait hours for compression to finish; it would also increase resource usage/time on the client side.
It also sounds to me like they should have made it parallel, or at least install the required packages while downloading the next ones, rather than change every single package's compression method. Work smarter.
One doesn't exclude the other; it should be possible to order downloads by dependencies and start decompressing/installing while the rest is still downloading. Not sure why no one has done that yet, but maybe this will be a project for me in the future when I have more free time.
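
For what it's worth, the overlap described here is a classic producer/consumer pattern. A minimal sketch, where download_pkg and install_pkg are hypothetical stand-ins rather than any real package manager API:

Code:
# One thread downloads packages in dependency order while the main
# thread decompresses/installs whatever has already arrived.
import queue
import threading
import time

def download_pkg(name):      # stand-in for a network transfer
    time.sleep(1.0)
    return f"{name}.tar.zst"

def install_pkg(archive):    # stand-in for decompress + install
    time.sleep(0.5)
    print("installed", archive)

def downloader(names, q):
    for name in names:       # names pre-sorted topologically by dependencies
        q.put(download_pkg(name))
    q.put(None)              # sentinel: no more packages

q = queue.Queue()
t = threading.Thread(target=downloader, args=(["libfoo", "libbar", "app"], q))
t.start()
while (archive := q.get()) is not None:
    install_pkg(archive)     # overlaps with the remaining downloads
t.join()

With these made-up sleeps, the strictly serial order takes about 4.5 s, while the pipelined version finishes in about 3.5 s, since every install except the last overlaps a download.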



  • Weasel
    replied
    Originally posted by Anux View Post
Exactly, that's why I quoted the part that says 0.8% size increase. You seem to believe that downloading and decompression happen at the same time, but all package managers I know of do it serially, so decompression time is added to download time.

It obviously depends on your hardware: if you have a RasPi with gigabit internet then zstd or lz4 is better; if you have a Threadripper on a 56k modem then xz would be better.

But since there is practically no difference in size with the change to zstd, people with slower hardware benefit while others hardly notice the difference.
    I don't think I got such close compression ratios. They probably aren't using max compression for xz, so it's their problem.

It also sounds to me like they should have made it parallel, or at least install the required packages while downloading the next ones, rather than change every single package's compression method. Work smarter.



  • intelfx
    replied
    Originally posted by fitzie View Post

You are ignoring my point. I didn't say it's hard to figure out; it's just an afterthought for the zstd developers.
I'm not ignoring your point; I'm rejecting it. It is an afterthought for the zstd developers because it deserves to be one: in my view, it is not a behavior that anyone should rely on, much less consider a "significant difference" or, worse, a basis for implying negligence.

    Originally posted by fitzie View Post
I'm sure if and when toybox/busybox adds zstd, it will not be trash like it is upstream. For some reason they are still stuck in 2020. I'll let them know.
    They, in fact, are. These kinds of projects tend to severely lag behind the state of the art, and not just in terms of compressors supported. So yes, it is pretty reasonable to say that they are likely still stuck in 2020.

    It's not a bad thing per se, but I prefer to call a spade a spade.
    Last edited by intelfx; 03 December 2024, 05:31 PM.



  • fitzie
    replied
    Originally posted by intelfx View Post

    That's an even more flimsy argument. Out of all the gzip/bzip2/xz man pages, this behavior is only mentioned in passing in one of these, so what's actually monumentally stupid is relying on it.

Put it another way: if you wrote a script relying on that behavior (or the absence thereof), I'd fire you. (And yes, I'm in a position where I make such decisions.)
You are ignoring my point. I didn't say it's hard to figure out; it's just an afterthought for the zstd developers. I'm sure if and when toybox/busybox adds zstd, it will not be trash like it is upstream. For some reason they are still stuck in 2020. I'll let them know.



  • intelfx
    replied
    Originally posted by fitzie View Post

This is the most obvious and stupid default behavior, and you didn't even notice the lack of --exclude-compressed being the default there
    That's an even more flimsy argument. Out of all the gzip/bzip2/xz man pages, this behavior is only mentioned in passing in one of these, so what's actually monumentally stupid is relying on it.

Put it another way: if you wrote a script relying on that behavior (or the absence thereof), I'd fire you. (And yes, I'm in a position where I make such decisions.)



  • fitzie
    replied
    Originally posted by Anux View Post
So ill-informed:

Another lie.

pzstd's options are a subset of zstd's, and the command-line structure is exactly the same.
I'd have to review any shell scripts you write; I'd expect them to have many false assumptions.



  • fitzie
    replied
    Originally posted by intelfx View Post

    So the only "unnecessary difference" you could cite is that zstd uses --keep instead of --rm by default? That's quite a flimsy argument you got there. Besides, the difference is necessary: deleting source files is bad UX, no other CLI tool does that.

And it has all the same flags; it's just the default that was flipped, so nothing is "unfamiliar".



Yeah, like Arch and Fedora and OCI containers and podman? (The change of default in F41 was rejected, but it's not really "rejected" so much as "delayed"; they are going to implement double compression and retire gzip later, which is a clear migration path in progress.) Talk about being out of touch.

I really think you're writing from the year 2020. Please run that time machine in reverse and get back to 2024.
This is the most obvious and stupid default behavior, and you didn't even notice the lack of --exclude-compressed being the default there, so I don't even think more examples could help you. I could cite a lot more, but you could look for yourself if you cared; you don't, just like the zstd developers. I think you are stuck in 1971. I don't think you will make it to a 16-bit timestamp overflow.

Hope that helps.
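
For anyone who wants to verify the --keep default difference being argued about, here is a quick sketch, assuming the xz and zstd command-line tools are installed:

Code:
# xz, like gzip, deletes the source file after compressing it;
# zstd keeps the source unless you pass --rm.
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    for tool in ("xz", "zstd"):
        src = os.path.join(d, f"sample_{tool}.txt")
        with open(src, "w") as f:
            f.write("hello\n" * 1000)
        subprocess.run([tool, "-q", src], check=True)
        print(f"{tool}: source still exists: {os.path.exists(src)}")
# Expected: "xz: source still exists: False" and
#           "zstd: source still exists: True".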

