Arch Linux's Pacman 6.0 Enters Alpha With Parallel Downloads Support

  • #11
    Originally posted by dirlewanger88

    ...but the crippling slowness of dnf in general more than cancels out any improvement. dnf is by far the worst Linux package manager, by a massive margin.
    Being robust and full of features comes at a price, and that price is usually slowness. Besides, it's a package manager. Most people only interact with it when they first install a system and get everything set up, and then later when they do updates. In those limited instances, the package manager taking a bit longer to make sure all the selected configurations and choices jibe, so that I or the update don't end up shooting ourselves in the foot, is a slowness I can live with.

    When I factor in Aurutils and all my AUR steps, compiling everything, etc, Arch and Manjaro are some of the slowest Linux systems to update regardless of how improved Pacman gets. Fedora and dnf don't look so slow in perspective.



    • #12
      Originally posted by fguerraz View Post
      That is SO 1990s and not the solution. This:
      • is a very sad attempt at working around the poor design of TCP congestion control
      • will put extra load on servers for no good reason (and they might even stop wanting to offer free bandwidth to the project)
      • will have no return on investment as soon as everybody has switched to the new version
      Please, time, prove me wrong!
      No need for time, logic proves you wrong: it's still the same total amount of data that gets downloaded. And if clients download faster, each of them occupies the mirror for less time, so the chance of simultaneous downloads is actually lower than before.
      ROI is debatable. At the end of the day, everybody gets speedier downloads. But most of the time, the reboot after an update causes more downtime than the update process itself (Linux boots fast, but there's also the UEFI firmware that needs to initialize).
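      For what it's worth, turning the new behaviour on should be a one-line change. A minimal sketch, assuming the option ships in /etc/pacman.conf as documented for pacman 6.0 (the value is the number of packages fetched concurrently):

      # /etc/pacman.conf
      [options]
      # fetch up to 5 packages at a time instead of one after another
      ParallelDownloads = 5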



      • #13
        In some ways I would like to see a little more work allowing users to be *less* tied to the internet.

        So far, running an offline isolated instance of Arch (and many other Linux/BSDs) is fairly scatty and fiddly.
        I wouldn't say *any* packaging system makes this particularly easy. They all follow the same "slurp from the internet" paradigm.

        The closest is probably things like Flatpak, but they are still fairly ad-hoc and bring their own sets of issues.



        • #14
          Originally posted by bug77 View Post

          No need for time, logic proves you wrong: it's still the same total amount of data that gets downloaded. If clients download faster, the chance for simultaneous downloads is actually lower than before.
          By your logic then, can you explain why single downloads are slow? Why doesn't one single TCP connection use all of your available bandwidth?



          • #15
            Oh hell yes! This should really speed things up.



            • #16
              Surprised nobody mentioned powerpill. I have been using it for a while now, ever since I discovered my updates were downloading at half speed.
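              For anyone curious, powerpill is meant as a drop-in wrapper that takes pacman's usual flags, so a typical full update looks roughly like this (exact setup may vary):

              # same syntax as pacman, with downloads split across mirrors
              sudo powerpill -Syu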



              • #17
                Originally posted by skeevy420 View Post
                I have a shitty ISP so it'll help me. They cap individual files around 10-11mb/s, but I can do up to 9 or 10 of them at a time before they're all throttled.
                I wasn't aware there was such a thing as capping the download speed of individual files. That's pretty crappy. So yeah, I guess for you this would be a big help.

                EDIT:
                I just realized that kind of ISP is basically encouraging the use of torrenting lol.
                Last edited by schmidtbag; 04 December 2020, 11:28 AM.



                • #18
                  Originally posted by schmidtbag View Post
                  That being said, I don't really see how this will speed things up for most people. If your internet connection is faster than the server's, downloading multiple files isn't going to speed up anything unless the other downloads are coming from different mirrors. If it downloads from multiple mirrors, then I could see this being very beneficial to anyone with very fast internet.
                  Most update packages are so small that the download speed never manages to go faster than some 100s of Kb/s at best before the package is downloaded and the connection resets for the next file. So, imagine downloading multiple small update packages at the same time at a few 100s of Kb/s, and then I think you'll see how this can really speed things up for most people.
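                  To illustrate outside of pacman (the URL list and tools here are just stand-ins): fetching a batch of small files with a few connections in flight at once usually finishes far sooner than fetching them one by one, because each individual transfer ends before it ever ramps up.

                  # urls.txt holds one package URL per line (placeholder list)
                  xargs -n 1 -P 4 curl -s -O < urls.txt   # 4 downloads in flight at a time
                  xargs -n 1 -P 1 curl -s -O < urls.txt   # serial equivalent, for comparison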

                  Originally posted by skeevy420 View Post
                  When I factor in Aurutils and all my AUR steps, compiling everything, etc, Arch and Manjaro are some of the slowest Linux systems to update regardless of how improved Pacman gets. Fedora and dnf don't look so slow in perspective.
                  I get what you're saying about dnf, but we're talking about pacman's speed and efficiency as a package manager, not how long the overall update process of an Arch system can get if you choose to throw in a bunch of source packages. Pacman's speed in resolving dependencies and installing packages has nothing to do with how fast your PC can compile stuff from source. Also, the AUR packages are unofficial, user-created content and not part of the main repos, so IMHO they shouldn't even be considered part of the overall update process in the first place. The fact that Arch gives you the ability to automagically compile and install them via the system's package manager is a handy bonus; it doesn't become a drawback of the package manager just because that takes more time.

                  Originally posted by kpedersen View Post
                  In some ways I would like to see a little more work allowing users to be *less* tied to the internet.

                  So far, running an offline isolated instance of Arch (and many other Linux/BSDs) is fairly scatty and fiddly.
                  I wouldn't say *any* packaging system makes this particularly easy. They all follow the same "slurp from the internet" paradigm.

                  The closest is probably things like Flatpak, but they are still fairly ad-hoc and bring their own sets of issues.
                  Can't help but wonder, how exactly do you mean that? I mean, how is running an offline instance of Arch (or any other OS) scatty and fiddly? You simply freeze it in time (i.e. never update it) and all's well. In what way would a packaging system make this particularly easy (or hard) if there are never any updates to install?
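                  That said, if the point is installing packages without a network at all, pacman can already be pointed at a local repository built from cached or copied package files; a rough sketch (paths made up):

                  # build a repo database from the packages you have on hand
                  repo-add /srv/local-repo/local.db.tar.gz /srv/local-repo/*.pkg.tar.zst

                  # /etc/pacman.conf
                  [local]
                  SigLevel = Optional TrustAll
                  Server = file:///srv/local-repo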



                  • #19
                    Originally posted by dirlewanger88

                    ...but the crippling slowness of dnf in general more than cancels out any improvement. dnf is by far the worst Linux package manager, by a massive margin.
                    What are you talking about? dnf isn't slow, it's about normal for a package manager: it takes effectively no time to resolve dependencies, and installations take a fraction of a second per package. Obviously download times depend on your connection, but dnf has you covered there too with delta packages that only download partial files for most updates. On the other hand, apt (or, to be more specific, dpkg) and portage are glacial.
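                    If anyone wants to check whether deltas (or dnf's own parallel fetching) are in play, both are plain config knobs; a rough sketch, assuming these options are available on your Fedora release:

                    # /etc/dnf/dnf.conf
                    [main]
                    deltarpm=True                # fetch .drpm deltas and rebuild full rpms locally
                    max_parallel_downloads=10    # dnf's concurrent-download setting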



                    • #20
                      Originally posted by fguerraz View Post

                      By your logic then, can you explain why single downloads are slow? Why doesn't one single TCP connection use all of your available bandwidth?
                      Yes, I can: it's not about your capacity, it's about server capacity.
                      Linux doesn't have the means to determine which mirror is the fastest and pull from that one. It pulls from whatever mirror you set during setup and that's it.

