Arch's Switch To Zstd: ~0.8% Increase In Package Size For ~1300% Speedup In Decompression Time

  • #1

    Phoronix: Arch's Switch To Zstd: ~0.8% Increase In Package Size For ~1300% Speedup In Decompression Time

    Arch Linux has been working the past several months on transitioning to Zstd-compressed packages in place of XZ compression for faster package installation. At the end of December that package compression scheme changed and the results are impressive...
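    On that note, a quick way to see how far along the transition is on your own machine is to look at the magic bytes of the packages in the pacman cache. A minimal sketch (Python, standard library only; assumes the default cache location):

        # Identify the compression of cached Arch packages by their magic
        # bytes. Assumes the default pacman cache directory.
        from pathlib import Path

        MAGICS = {
            b"\x28\xb5\x2f\xfd": "zstd",  # zstd frame magic
            b"\xfd7zXZ\x00": "xz",        # xz stream header magic
        }

        for pkg in sorted(Path("/var/cache/pacman/pkg").glob("*.pkg.tar.*")):
            if pkg.suffix == ".sig":
                continue  # skip detached signature files
            head = pkg.open("rb").read(6)
            kind = next((n for m, n in MAGICS.items() if head.startswith(m)), "other")
            print(f"{kind:>5}  {pkg.name}")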

  • #2
    Are there any absolute numbers or benchmarks available?

    I didn't find any after a short search. 1300% sounds impressive but, personally, I'd rather save space (or rather disk and net I/O) than compute time. IMHO network speed and latencies add more time than decompression time.

  • #3
    0.8 vs 1300. Misel, you serious, really?

  • #4
    I'll take a 1% size penalty any day when the trade is a more-than-tenfold speedup in decompression. I mean, it's literally an order-of-magnitude performance increase for a statistical rounding error in size.
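    To put rough numbers on that trade, here is a back-of-the-envelope sketch in which every figure is a made-up assumption, just to show the shape of the trade-off:

        # Extra download time from a 0.8% size penalty vs. time saved by
        # ~14x faster decompression. All inputs are hypothetical.
        PKG_MB = 50.0          # assumed package size in MB
        SIZE_PENALTY = 0.008   # +0.8% compressed size with zstd
        XZ_DECOMP_S = 4.0      # assumed xz decompression time in seconds
        SPEEDUP = 14.0         # ~1300% faster, i.e. roughly 1/14th the time

        time_saved = XZ_DECOMP_S - XZ_DECOMP_S / SPEEDUP
        for mbit_s in (10, 100, 1000):
            extra_dl = PKG_MB * SIZE_PENALTY * 8 / mbit_s  # MB -> Mbit
            print(f"{mbit_s:>4} Mbit/s link: +{extra_dl:.3f}s download, "
                  f"-{time_saved:.2f}s decompression")

    Under these assumptions, even on a slow link the added download time stays well below the decompression time saved.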

  • #5
    This change should be extremely noticeable when upgrading :-) Whenever I tried other distributions in the past, the speed of pacman upgrades always brought me back to Arch.

  • #6
    Originally posted by Misel View Post
    Are there any absolute numbers or benchmarks available?

    I didn't find any after a short search. 1300% sounds impressive but, personally, I'd rather save space (or rather disk and net I/O) than compute time. IMHO network speed and latencies add more time than decompression time.
    They used a sample size of 545 packages. It's linked right in the article.

  • #7
    Misel, really? How often are you updating your Arch to be bothered by such a "major" 0.8% increase in size, and be troubled by your disk and net I/O? I can understand it if you are on a 3G/4G/5G network where traffic might be a problem, but even so the increase in size is not that high. You can always clear the cached files if you are running out of space.
    I personally don't really care, as long as my system works. I don't care whether it installs faster or downloads faster, as long as the system works.

  • #8
    Since when is size (seriously, it's only 0.8%) a serious concern these days? It sounds like this change has a big impact on installation/upgrade time. That should certainly reduce the downtime and slowdowns due to updating and installing packages.
    And if size is a truly serious concern, go with a package manager that does diffs instead of full package downloads. That only works, though, if you don't constantly clear the cache, since clearing it eliminates the option of diff downloads.

  • #9
    Originally posted by loganj View Post
    Misel, really? How often are you updating your Arch to be bothered by such a "major" 0.8% increase in size, and be troubled by your disk and net I/O? I can understand it if you are on a 3G/4G/5G network where traffic might be a problem, but even so the increase in size is not that high. You can always clear the cached files if you are running out of space.
    I personally don't really care, as long as my system works. I don't care whether it installs faster or downloads faster, as long as the system works.
    As I run some unattended package upgrades on normal Arch boxes, a faster upgrade installation process reduces the risk of a failed upgrade. Downloads can be resumed without issues, but if you reset your system right in the middle of an upgrade process it could break unexpectedly. Before anybody tells me that this is not recommended: I spend less time fixing the occasional broken upgrade than I would spend upgrading all systems manually ;-)

  • #10
    I guess I should have put more emphasis on disk I/O and network speed rather than the actual required space. The actual space these days is not an issue. Most storage devices are more than large enough for the extra 0.8%.

    What I was getting at, or rather trying to, was that a 1300% speedup could mean 1 ms instead of 13 ms. As far as I can tell, they only mention the actual decompression, but not the download and disk access time. (Again, I may be wrong on that one.)

    So are there any benchmarks against other algorithms? E.g. bz2, which is much slower but compresses much better in comparison, so download time and disk reads should be much faster. The question is, though, how do these factor in?

    So again, are there any benchmarks with absolute numbers?
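    Lacking published numbers, here is a minimal sketch for collecting them locally. The input file is hypothetical, zstd support comes from the third-party zstandard package, and the levels are only rough stand-ins for what the distributions actually use:

        # Compressed size and decompression time for bz2, xz and zstd on
        # a single file. Input path and compression levels are assumptions.
        import bz2
        import lzma
        import time

        import zstandard

        data = open("example.pkg.tar", "rb").read()  # hypothetical tarball

        codecs = {
            "bz2": (lambda d: bz2.compress(d, 9), bz2.decompress),
            "xz": (lambda d: lzma.compress(d, preset=6), lzma.decompress),
            "zstd": (
                zstandard.ZstdCompressor(level=19).compress,
                zstandard.ZstdDecompressor().decompress,
            ),
        }

        for name, (compress, decompress) in codecs.items():
            blob = compress(data)
            start = time.perf_counter()
            decompress(blob)
            ms = (time.perf_counter() - start) * 1000
            print(f"{name}: {len(blob) / len(data):.3f} of original size, "
                  f"{ms:.1f} ms to decompress")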
