Fedora Considers Dropping Delta RPMs

  • Fedora Considers Dropping Delta RPMs

    Phoronix: Fedora Considers Dropping Delta RPMs

    For many years now there has been delta RPM support built into Fedora to allow just downloading the binary difference between the currently installed RPM package and the updated version. While this made sense during the days of limited Internet connectivity/bandwidth, delta RPMs haven't proven useful in years and now Fedora Linux is considering removing this support...


  • #2
    Fedora's implementation was always half-baked.

    Downloading 60 MB of delta metadata just to maybe save 2.5 MB!

    SUSE's implementation (Leap only, as TW sadly doesn't support it) is way better.



    • #3
      I think reading the devel discussion is instructive here as to why drpms are no longer useful, and perhaps were never very useful even when bandwidth was more constrained:

      Richard Jones from Red Hat asked:

      It's also been a long time since I've seen any benefit. Can anyone summarise quickly why delta RPMs don't work? It seems like they "obviously" should be possible, because there must be a lot of commonality in the content of adjacent RPM versions ...

      And the direct reply from Neal Gompa:

      They don't work because we compose updates wrong. Instead of building on top of previous updates, we throw them away and rebuild from the latest. Since we don't merge previous composes into a new compose, we are missing too much stuff for DeltaRPMs to be continuously useful.
      "Wrong" is an interesting choice of words in the reply. It implies that Fedora's update system in practice doesn't really utilize the delta RPM procedure properly to begin with and perhaps never has? It depends on how far back the Fedora package maintainers have not been basing new package updates on previous release packages.



      • #4
        Originally posted by stormcrow View Post
        "Wrong" is an interesting choice of words in the reply. It implies that Fedora's update system in practice doesn't really utilize the delta RPM procedure properly to begin with and perhaps never has?
        It's not the package maintainers' choice how the systems push updates. Maintainers are still building updates on top of previous ones, but the systems (for various reasons) don't keep the old updates around on the mirrors. So at any given point, you have GA and the latest updates, but nothing in between.



        • #5
          Originally posted by stormcrow View Post
          "Wrong" is an interesting choice of words in the reply. It implies that Fedora's update system in practice doesn't really utilize the delta RPM procedure properly to begin with and perhaps never has?
          So they messed up. That makes more sense than "it's not needed now that the Internet is faster", because I have yet to find a person who would rather download 600 MB of updates than 60 MB. As a rolling-release user, I see downloads that big quite often.



          • #6
            I always look for ways to save bandwidth and enable features like this, but I've always had doubts about whether the delta actually works. How can I be sure it applies properly? That said, I have limited bandwidth a lot of the time, so I don't update my system very often. It's made me reflect a bit: as a former Arch Linux user, the habit of running pacman -Syu isn't really useful *most of the time*. Apart from the browser, I don't see the benefit of updating your system every single day.



            • #7
              Originally posted by mirmirmir View Post
              I always find a way to save bandwidth and enable feature like this, but I always have a doubt the delta doesn't work. Like how can I be sure it works properly?
              Perhaps using tools like rsync/zsync would be a more robust solution to this?
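The rsync/zsync family works by splitting the old file into fixed-size blocks, hashing them, and only fetching the blocks of the new file that aren't already present locally. A minimal sketch of that idea (a tiny illustrative block size and plain SHA-256 instead of zsync's actual rolling-checksum wire format; function names are my own):

```python
import hashlib

BLOCK = 4  # tiny block size for the demo; real tools use ~2-4 KiB


def block_hashes(data: bytes) -> dict:
    """Map the hash of each fixed-size block to its offset in the old file."""
    return {hashlib.sha256(data[i:i + BLOCK]).digest(): i
            for i in range(0, len(data), BLOCK)}


def plan_download(old: bytes, new: bytes) -> list:
    """Return per-block instructions: reuse a local block or fetch it remotely."""
    have = block_hashes(old)
    plan = []
    for i in range(0, len(new), BLOCK):
        h = hashlib.sha256(new[i:i + BLOCK]).digest()
        plan.append(("reuse", have[h]) if h in have else ("fetch", i))
    return plan


old = b"AAAABBBBCCCCDDDD"
new = b"AAAABBBBXXXXDDDD"   # one block changed
plan = plan_download(old, new)
fetched = sum(1 for op, _ in plan if op == "fetch")
print(f"{fetched} of {len(plan)} blocks need downloading")  # → 1 of 4
```

The nice property over deltarpm is that the server only needs a checksum index of the latest version, not a precomputed delta against every older version a client might be running.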



              • #8
                Originally posted by bug77 View Post

                So they messed up. Makes more sense than "it's not needed now when the Internet is faster", because I have yet to find a person that would download 600MB of updates, instead of 60MB. As a rolling-release user, I see that big a download quite often.
                This perfectly captures my concerns. They should have explained that part first. Removing things simply because they aren't a strict necessity is bad.



                • #9
                  Originally posted by mirmirmir View Post
                  I always find a way to save bandwidth and enable feature like this, but I always have a doubt the delta doesn't work. Like how can I be sure it works properly?
                  It does work. For example, Google Chrome pushes updates with Courgette (which is more efficient than bsdiff). You basically take the old binary and the new binary, generate a diff file, and apply it. I used a tool like that in one game modding community: we had to modify the game's main file but didn't want to publish it for legal reasons, so we shared mods as a patch applied on top of the original file. That also ensured every user had to buy the game anyway.

                  The issue from a package maintainer's point of view is that, to be efficient, you have to generate more than one diff. What if someone upgrades not from version 0.9 to 0.10 but from version 0.5 to 0.10? At that point, downloading five diffs might cost more than redownloading the whole package. Balancing how many diffs to generate is quite a hard task.
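That trade-off can be sketched with a toy cost model (all sizes below are made-up illustrative numbers, not real package sizes):

```python
def best_strategy(full_size: int, delta_sizes: list) -> tuple:
    """Compare the cost of chaining deltas against one full download.

    full_size   -- size of the complete package (MB)
    delta_sizes -- sizes of each delta in the chain from the installed
                   version up to the latest (MB)
    Returns (strategy, download_cost).
    """
    chain_cost = sum(delta_sizes)
    if chain_cost < full_size:
        return ("deltas", chain_cost)
    return ("full", full_size)


# Hypothetical 60 MB package with 15 MB deltas per version step:
# one step behind -> the delta clearly wins...
print(best_strategy(60, [15]))               # → ('deltas', 15)
# ...but five steps behind, chaining costs 75 MB > 60 MB.
print(best_strategy(60, [15, 15, 15, 15, 15]))  # → ('full', 60)
```

This is why real systems either generate deltas against only the last few versions, or (like zsync-style approaches) avoid precomputed deltas entirely.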



                  • #10
                    I always wondered why programs didn't just use something like rsync: tar up the files that comprise the package at its desired state, and compare the local SHA-256 of the tar with the one on the server.
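A rough sketch of that comparison in Python, using only the standard library (the function name and the fixed-mtime trick are my own assumptions, not any distro's actual tooling; real packagers need reproducible archives for the hashes to be comparable at all):

```python
import hashlib
import io
import tarfile


def tar_digest(files: dict) -> str:
    """Build a deterministic in-memory tar of {path: bytes} and hash it."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):          # fixed member order
            data = files[name]
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0                  # fixed mtime so the hash is reproducible
            tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()


local = tar_digest({"usr/bin/app": b"v1", "etc/app.conf": b"x=1"})
remote = tar_digest({"usr/bin/app": b"v2", "etc/app.conf": b"x=1"})
print("update needed" if local != remote else "up to date")  # → update needed
```

The catch is that a whole-archive hash only tells you *that* something changed, not *which* bytes, so you would still need per-file or per-block hashes (as rsync/zsync do) to avoid redownloading everything.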

