Arch Linux's Pacman 6.0 Enters Alpha With Parallel Downloads Support


  • #21
    Originally posted by dirlewanger88

    ...but the crippling slowness of dnf in general more than cancels out any improvement. dnf is by far the worst Linux package manager, by a massive margin.
    I have no idea what you are talking about.
    In my case dnf is always able to choose the fastest mirror and completely saturate my uplink.
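
    For context, both of those behaviors are tunable on the dnf side. A minimal /etc/dnf/dnf.conf sketch, using the stock fastestmirror and max_parallel_downloads options (the values here are just examples):

    Code:
    [main]
    # Rank mirrors by measured speed instead of taking them in list order.
    fastestmirror=True
    # Fetch up to 10 packages concurrently (dnf's default is lower).
    max_parallel_downloads=10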



    • #22
      Originally posted by Nocifer View Post
      Most update packages are so small that the download speed never manages to go faster than some 100s of Kb/s at best before the package is downloaded and the connection resets for the next file. So, imagine downloading multiple small update packages at the same time at a few 100s of Kb/s, and then I think you'll see how this can really speed things up for most people.
      But if you compare the size of the file to the download rate, you're not really losing any performance. If you download a 500KB file in less than a second, the reported rate is technically only 500KB/s, which makes the transfer look slower than it actually is. So if you download 10x 500KB files in parallel and another batch of 10 serially, then as long as your network connection isn't the bottleneck, both batches should finish in roughly the same time (rough model below).
      EDIT:
      Whoops made a mistake in my last sentence.
      Last edited by schmidtbag; 04 December 2020, 11:43 AM.
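
      To make the disagreement concrete, here's a back-of-envelope model of serial vs. parallel fetching of small packages. All numbers are invented; the point is only that per-connection setup/latency overhead, not bandwidth, is what parallel downloads can hide:

      Code:
      # Toy model: time to fetch N small packages, serially vs. with K connections.
      # "setup" stands in for TCP/TLS handshake plus mirror latency.
      N_FILES = 10          # packages to download
      SIZE_KB = 500         # size of each package
      BW_KBPS = 12_500      # link speed in KB/s (~100 Mbit/s)
      SETUP_S = 0.3         # per-connection setup overhead in seconds
      K = 5                 # hypothetical number of parallel connections

      transfer_s = SIZE_KB / BW_KBPS                          # time spent moving bytes per file
      serial_s = N_FILES * (SETUP_S + transfer_s)             # one file after another
      batches = -(-N_FILES // K)                              # ceil(N_FILES / K)
      parallel_s = batches * SETUP_S + N_FILES * transfer_s   # setups overlap, bytes share the link

      print(f"serial:   {serial_s:.2f}s")                     # ~3.40s
      print(f"parallel: {parallel_s:.2f}s")                   # ~1.00s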



      • #23
        Originally posted by microcode View Post
        Package downloads over HTTP/3 would be cool as well. If Arch had that Fedora money maybe they could use zstd diffs.
        That is total BS!

        For downloads like pacman's, you're not going to gain anything with HTTP/3.
        Sure, if you have a flaky connection that drops a lot of packets, then HTTP/3 might show a really small improvement.
        And by really small I mean that you likely can't even notice it.

        HTTP/3 adds a lot of value for lots of small connections (say, downloading website resources), not when downloading lots of big files.

        As for zstd diffs: you really have no clue what you're talking about.
        There are a couple of problems with that. On the surface, binary diffing seems cool and saves a lot of download time!
        But now the mirror needs to:
        - host the full version
        - host the diff from version X to X+1
        So hosting the Arch repository definitely takes more space.
        And even if that weren't an issue, you still have the problem of dependencies. A binary diff only helps if everything else (dependency-wise) stays the same. For example, if an updated version suddenly requires a new glibc version (just to name one), then you can forget about that binary diff. Doing binary diffs at the package-manager level (so pacman in this case) is complicated.
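
        For what it's worth, zstd itself does have a patch mode (--patch-from), so the raw diffing step is the easy part; the repo and dependency bookkeeping above is the hard part. A rough illustration, assuming two package tarballs pkg-1.0.tar and pkg-1.1.tar are sitting on disk (this is not anything pacman actually does today):

        Code:
        # Mirror side: produce a patch that turns the old tarball into the new one.
        zstd --patch-from=pkg-1.0.tar pkg-1.1.tar -o pkg-1.0-to-1.1.patch.zst
        # Client side: reconstruct the new tarball from the old one plus the patch.
        zstd -d --patch-from=pkg-1.0.tar pkg-1.0-to-1.1.patch.zst -o pkg-1.1.tar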



        • #24
          Originally posted by markg85 View Post

          That is total BS!

          For downloads like pacman's, you're not going to gain anything with HTTP/3.
          Sure, if you have a flaky connection that drops a lot of packets, then HTTP/3 might show a really small improvement.
          And by really small I mean that you likely can't even notice it.

          HTTP/3 adds a lot of value for lots of small connections (say, downloading website resources), not when downloading lots of big files.

          As for zstd diffs: you really have no clue what you're talking about.
          There are a couple of problems with that. On the surface, binary diffing seems cool and saves a lot of download time!
          But now the mirror needs to:
          - host the full version
          - host the diff from version X to X+1
          So hosting the Arch repository definitely takes more space.
          And even if that weren't an issue, you still have the problem of dependencies. A binary diff only helps if everything else (dependency-wise) stays the same. For example, if an updated version suddenly requires a new glibc version (just to name one), then you can forget about that binary diff. Doing binary diffs at the package-manager level (so pacman in this case) is complicated.
          If only you could get away with just X and X+1. But you can't guarantee the client has exactly the previous version installed, so you usually end up keeping a whole series of diffs (often time-based) handy.

          And yes, HTTP/3 is about lowering connection overhead (so to speak, since there's no TCP connection); it will do nothing for bulk file transfers. Plus, HTTP/3 is not ratified yet, and jumping in too soon can result in unnecessary headaches.
          Last edited by bug77; 05 December 2020, 08:26 PM.
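
          To put the storage point in rough numbers: a mirror offering deltas has to keep the full package plus one diff per old version clients might still be on. A toy estimate with entirely made-up sizes:

          Code:
          # Toy estimate of per-package mirror storage under a delta scheme.
          full_mb = 25          # current full package (invented size)
          diff_ratio = 0.15     # assume each diff is ~15% of the full package
          kept_versions = 6     # old versions clients might still be running

          total_mb = full_mb + kept_versions * diff_ratio * full_mb
          print(f"{total_mb:.1f} MB on the mirror instead of {full_mb} MB")  # 47.5 vs 25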



          • #25
            Originally posted by ms178 View Post
            Finally, in 2020 we get parallel downloads at least on Arch soon. My Gigabit connection would love to get used properly.
            You just need faster mirrors. 10 MiB/s is not unlikely. reflector can help.
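
            For anyone who hasn't tried it, reflector can regenerate the mirrorlist ranked by measured download rate; something along these lines (check the current man page for the exact flags, and run it as root):

            Code:
            # Rank the 20 most recently synced HTTPS mirrors by download rate
            # and write them to pacman's mirrorlist.
            reflector --protocol https --latest 20 --sort rate --save /etc/pacman.d/mirrorlist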



            • #26
              Originally posted by ms178 View Post
              Finally, in 2020 we get parallel downloads at least on Arch soon. My Gigabit connection would love to get used properly.
              Here it used to cost $2000 per month, and you had to be a corporation, just to get 1Gbps speeds...



              • #27
                Originally posted by dirlewanger88

                ...but the crippling slowness of dnf in general more than cancels out any improvement. dnf is by far the worst Linux package manager, by a massive margin.
                Nope, the worst package manager is APT, since it can destroy your system and literally uninstall it if you make a tiny mistake.

                Plus, the held packages insanity. I explode when it happens.



                • #28
                  Since Arch users usually sort their mirror list by server speed, pacman should pick the top 4 and download from them in parallel. That way you're actually taking advantage of increased bandwidth.
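
                  For reference, the pacman 6.0 alpha exposes this as a pacman.conf setting that controls how many files are fetched at once; which mirrors are used is still decided by the mirrorlist order. A minimal sketch of the relevant bit of /etc/pacman.conf:

                  Code:
                  [options]
                  # Download up to 5 package files concurrently (pacman >= 6.0);
                  # mirror selection still follows /etc/pacman.d/mirrorlist.
                  ParallelDownloads = 5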



                  • #29
                    Originally posted by schmidtbag View Post
                    I wasn't aware there was such a thing as to cap download speeds of individual files. That's pretty crappy. So yeah, I guess for you this would be a big help.

                    EDIT:
                    I just realized but, that kind of ISP is basically encouraging the use of torrenting lol.
                    I didn't either until I started downloading random crap from all over the place just to watch the speeds and see what happened. No matter what I download or where I download it from, it tops out around 10 to 11Mbps, except for Speedtest.net where I hit 85Mbps... and I'm supposed to get 100Mbps, but there could be a stream or something going on in another room. Only once I have enough downloads running to approach the full, advertised 100Mbps do the individual transfers get throttled. I guess they figure that since that's good enough for a 4K stream, it's good enough to throttle everything else at.

                    Only if I'm torrenting with a good VPN, though. Long story short, I've been hit with a few cease-and-desist letters from HBO over the years.



                    • #30
                      Originally posted by Nocifer View Post
                      I get what you're saying about dnf, but we're talking about pacman's speed and efficiency as a package manager, not how long the overall update process of an Arch system can get if you choose to throw in a bunch of source packages. Pacman's speed in resolving dependencies and installing packages has nothing to do with how fast your PC can compile stuff from source. Also, the AUR packages are unofficial, user-created content and not part of the main repos, so IMHO they shouldn't even be considered as part of the overall update process in the first place. The fact that Arch gives you the ability to automagically compile them and install them via the system's package manager is a handy bonus, not a drawback of the package manager because it takes more time to do the job.
                      While the AUR might be unofficial, some distributions that use pacman include AUR support in their tools and pacman wrappers, so AUR upgrade speed does factor into an Arch upgrade to some extent. My standard process is: pacman for the system update, aurutils to build AUR updates into my local repo, then pacman again to install those AUR updates (rough sketch below).

                      FWIW, Arch doesn't give you the ability to automagically compile and install them via the system's package manager, or even ship an official tool for it. The actual Arch Way is to build the package in a clean chroot with makepkg and then install the result on the real system with pacman. I suppose there's makepkg -sif outside of the chroot, but IMHO that should be treated more like a quick hack than the proper way.
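
                      A rough sketch of that aurutils flow, assuming a local pacman repo (called "custom" here purely as a placeholder, configured in pacman.conf) already exists; double-check the flags against aurutils' own docs:

                      Code:
                      # 1. Update official packages first.
                      sudo pacman -Syu
                      # 2. Rebuild out-of-date AUR packages and add them to the local "custom" repo.
                      aur sync --upgrades
                      # 3. Install the freshly built AUR packages like any other update.
                      sudo pacman -Syu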

