Fedora 41 Will Try Again To Switch To DNF5 Package Manager


  • #21
    Originally posted by User42 View Post

    While your last sentence is true, there are a few rules that just make reading easier.

    The Oxford comma they use there, while correct, is generally unnecessary and frowned upon when not used for disambiguation.
    In English, "while" is a subordinator: putting a comma before it is generally a mistake (and here it is).

    At any rate, having now read the document in its entirety, the "comma" situation is making a mountain out of a molehill... however, I do agree that most sentences are convoluted. It's not that commas are a problem, it's that they seem to like extending sentences with bits and bobs, a bit like they had a thought, and they wanted to add to it, but never really committing to actually end the bloody sentence, when it should have been done before, which actually explains pretty well why they use subordinators with commas, like I am doing presently, because every time the sentence should end, a comma is used to extend it, so it's not really a subordinator problem or a comma problem, it's clearly a writing style problem, with information added as it pops into their mind, which is actually one of the major uses of the semicolon, which is what I should have used to start with.

    Of course none of the sentences are nearly as convoluted as that, but it's the principle; semicolons, or starting a new sentence, would have been better structure-wise.
    Muphry's law is strong, and extremely off-topic, here.

    Comment


    • #22
      Originally posted by EazyVG View Post
      All I can say is that I am happy with the openSUSE package manager, one of the best. Now Flatpak is just adding an extra layer of accessibility.
      Yes, it's so cool it's outrageously slow.

      Comment


      • #23
        I decided to do a little testing.
        My internet connection is not very stable or fast, but regardless:

        Code:
        sudo dnf clean all
        time sudo dnf update
        real 0m25,624s
        user 0m0,006s
        sys 0m0,023s

        sudo dnf5 clean all
        time sudo dnf5 update
        real 0m18,041s
        user 0m0,008s
        sys 0m0,027s

        Comment


        • #24
          Originally posted by sophisticles View Post

          When did you get appointed King of the English language?

          They wrote it the way I would have written it.

          I, think, there, are, more, important, things, to, worry, about.
          You're ignorant, so I don't care how you'd have written it.

          Comment


          • #25
            Originally posted by fitzie View Post
            I plan on using regular dnf as long as it's maintained. There really is zero value in dnf5 if you have Python installed. I would have hoped that they would have addressed the slowness and memory usage of dnf, but it doesn't really fix anything.
            Your comment makes no sense to me. It seems to me you have misunderstood some things.

            DNF ("DNF4") is written in Python, which is the reason it's so slow. DNF5, which is written in C++, does address the slowness (and they are addressing the memory usage as well, afaik; from what I read it wasn't DNF's fault, at least not fully). It being written in C++ is what makes the performance overhaul possible.

            DNF5's performance benefits will be present regardless of whether Python is installed or not. Why would Python being installed affect anything? Also, why does it matter to you if Python is installed or not? If anything else on your system requires Python, then let it be? If not, then remove it?

            Comment


            • #26
              Originally posted by User42 View Post

              While your last sentence is true, there are a few rules that just make reading easier.

              The Oxford comma they use there, while correct, is generally unnecessary and frowned upon when not used for disambiguation.
              In English, "while" is a subordinator: putting a comma before it is generally a mistake (and here it is).

              At any rate, having now read the document in its entirety, the "comma" situation is making a mountain out of a molehill... however, I do agree that most sentences are convoluted. It's not that commas are a problem, it's that they seem to like extending sentences with bits and bobs, a bit like they had a thought, and they wanted to add to it, but never really committing to actually end the bloody sentence, when it should have been done before, which actually explains pretty well why they use subordinators with commas, like I am doing presently, because every time the sentence should end, a comma is used to extend it, so it's not really a subordinator problem or a comma problem, it's clearly a writing style problem, with information added as it pops into their mind, which is actually one of the major uses of the semicolon, which is what I should have used to start with.

              Of course none of the sentences are nearly as convoluted as that, but it's the principle; semicolons, or starting a new sentence, would have been better structure-wise.
              I didn't read the whole text. My feeling was that your issue was with appositional commas. So not an extension of a sentence but actually an explanation or specification of the scope of a phrase or word within the sentence.

              Does it not check out?

              Comment


              • #27
                Originally posted by Eudyptula View Post
                Your comment makes no sense to me. It seems to me you have misunderstood some things.

                DNF ("DNF4") is written in Python, which is the reason it's so slow.
                No, Python is fine, because all of the performance-sensitive code is written in C within RPM or libsolv anyway. The real reason it is being rewritten is that microdnf (used in minimal environments) and dnf were deviating over time, and they wanted to consolidate the two. They also wanted to use dnf as a library, and Python isn't as suitable for that.

                The performance impact is mainly from time spent processing metadata (which is, among other things, used for dependencies based on file paths instead of packages). Dnf5 cuts down on the metadata considerably by only downloading a common subset of filesystem paths instead of everything, and only downloading the full set if a package uses it (the Fedora packaging guidelines have been changed so nothing in the official repos will use non-common paths). That behavior has been backported to Dnf4 in Fedora 40, which gives you a good portion of the performance and memory-usage benefits before even switching to Dnf5.
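                For anyone who wants the old behavior back, the conditional-filelists change is configurable; a sketch, assuming the `optional_metadata_types` option named in the Fedora 40 change proposal (verify against `man dnf.conf` on your release):

```ini
# /etc/dnf/dnf.conf -- opt back in to always downloading full filelists
# metadata (option name quoted from memory; check `man dnf.conf`)
[main]
optional_metadata_types=filelists
```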



                Comment


                • #28
                  Originally posted by Eudyptula View Post
                  Your comment makes no sense to me. It seems to me you have misunderstood some things.

                  DNF ("DNF4") is written in Python, which is the reason it's so slow. DNF5, which is written in C++, does address the slowness (and they are addressing the memory usage as well, afaik; from what I read it wasn't DNF's fault, at least not fully). It being written in C++ is what makes the performance overhaul possible.

                  DNF5's performance benefits will be present regardless of whether Python is installed or not. Why would Python being installed affect anything? Also, why does it matter to you if Python is installed or not? If anything else on your system requires Python, then let it be? If not, then remove it?

                  Originally posted by Myownfriend View Post

                  That's extremely weird. You said dnf5 has zero value over dnf if you have Python installed, but by that logic there's also zero reason to use dnf over dnf5 just because you have Python installed. DNF5 is significantly faster, and since Gnome Software would be using the dnf5daemon back-end instead of packagekit, you won't be storing redundant metadata anymore.

                  Using an old version of something just because it has additional dependencies doesn't make sense.

                  https://www.youtube.com/watch?v=zCy-fZSKe-U
                  The performance benefits of dnf5 are not significant. I just did an install of a package with a hot cache, and both dnf and dnf5 took the same time. With a stale cache, you are at the mercy of random webservers serving the repos, and the time spent locally in C or Python code is irrelevant.

                  Second, dnf5 was based on microdnf, which served to deliver an experience in small-footprint installs that don't necessarily have Python (hence the "micro"). When they decided to expand it to cover all of dnf's use cases, they called it dnf5.

                  Third, just like the yum-to-dnf conversion, in which a lot of features were lost (e.g. yum-plugins-changelog), the dnf-to-dnf5 conversion drops features. This is the basis of my recommendation to stick with dnf as long as it is maintained. There is no real-world performance benefit to dnf5, and I'd like people to post results showing otherwise.

                  Fourth, and more to the point, the entire model of package resolution by syncing full repos is broken. A bunch of this is because of the way too convoluted dependency scheme in Linux packages. But if you think about things from first principles, dnf5 is slow and bloated, and didn't fix any real issues other than not depending on Python. I find it funny because there are a lot of 1 GB VMs out there, and dnf5 will die due to running out of memory.

                  Looking at what's happening with package managers in the software space, you'd see that dnf is extremely slow compared to what others have accomplished. For example, I just installed a Python package and its dependencies in less than 1 second without allocating gigs of RAM. Of course RPM packages are more complicated, but do they really need to be?

                  Code:
                  $ uv pip install pydantic
                  Resolved 4 packages in 808ms
                  Downloaded 4 packages in 128ms
                  Installed 4 packages in 6ms
                   + annotated-types==0.6.0
                   + pydantic==2.6.4
                   + pydantic-core==2.16.3
                   + typing-extensions==4.10.0

                  Comment


                  • #29
                    Originally posted by fitzie View Post
                    The performance benefits of dnf5 are not significant. I just did an install of a package with a hot cache, and both dnf and dnf5 took the same time. With a stale cache, you are at the mercy of random webservers serving the repos, and the time spent locally in C or Python code is irrelevant.
                    An upgrade from a stale cache shows a 3.4x speedup. The video I linked to in my last post shows that that's because it's processing and downloading metadata in parallel. That's a significant gain for possibly the slowest thing that dnf4/5 can do, and dnf5 needs to retrieve less data in some scenarios.

                    These are my times when installing wine-opencl from a hot cache:

                    Code:
                         dnf4 install  dnf5 install  dnf4 remove  dnf5 remove
                    Real 0m2.469s      0m2.052s      0m1.817s     0m1.375s
                    User 0m0.009s      0m0.011s      0m0.011s     0m0.009s
                    Sys  0m0.020s      0m0.019s      0m0.020s     0m0.022s
                    That's a 16% improvement when installing and a 24% improvement when removing, with the same CPU time.

                    And again, dnf5 will share its cache with dnf5-daemon instead of having a separate cache for dnf4 and packagekit. It's one thing to say you don't care about the performance gains, but saying there's zero advantage is just wrong.

                    Btw, you'd want to use "time uv pip install pydantic". With uv's stats, we only know how long the individual steps take, and we don't know the CPU time.

                    Code:
                    Resolved 4 packages in 588ms
                    Downloaded 1 package in 198ms
                    Installed 1 package in 11ms
                     + pydantic==2.6.4
                    
                    real    0m0.880s
                    user    0m0.138s
                    sys    0m0.110s
                    If we just looked at uv's numbers, the process took a combined total of 786 ms, while time shows that it actually took 880 ms, about 11% longer, and a combined 248 ms of CPU time. Installing from cache was obviously significantly better:

                    Code:
                    time uv pip install pydantic
                    Resolved 4 packages in 20ms
                    Installed 1 package in 7ms
                     + pydantic==2.6.4
                    
                    real    0m0.045s
                    user    0m0.018s
                    sys    0m0.047s
                    That's excellent real time compared to dnf, but it uses over twice as much CPU time. Of course, they're also different packages from different repos, dnf provides more feedback, and sudo takes some time, too.
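                    One way to read that "about 11%" gap, sketched with awk (numbers copied from the uncached run above):

```shell
# uv accounted for 588 ms (resolve) + 198 ms (download) of an 880 ms run,
# as measured by `time`; the remainder is overhead uv's counters don't see.
awk 'BEGIN {
    reported = 588 + 198     # ms reported by uv for its steps
    real_ms  = 880           # ms of wall clock measured by `time`
    gap      = real_ms - reported
    printf "unaccounted: %d ms (%.0f%% of the run)\n", gap, gap / real_ms * 100
}'
```

                    which prints `unaccounted: 94 ms (11% of the run)`.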

                    Comment


                    • #30
                      Originally posted by Myownfriend View Post

                      An upgrade from a stale cache shows a 3.4x speedup. The video I linked to in my last post shows that that's because it's processing and downloading metadata in parallel. That's a significant gain for possibly the slowest thing that dnf4/5 can do, and dnf5 needs to retrieve less data in some scenarios.

                      These are my times when installing wine-opencl from a hot cache:

                      Code:
                           dnf4 install  dnf5 install  dnf4 remove  dnf5 remove
                      Real 0m2.469s      0m2.052s      0m1.817s     0m1.375s
                      User 0m0.009s      0m0.011s      0m0.011s     0m0.009s
                      Sys  0m0.020s      0m0.019s      0m0.020s     0m0.022s
                      That's a 16% improvement when installing and a 24% improvement when removing, with the same CPU time.

                      And again, dnf5 will share its cache with dnf5-daemon instead of having a separate cache for dnf4 and packagekit. It's one thing to say you don't care about the performance gains, but saying there's zero advantage is just wrong.

                      Btw, you'd want to use "time uv pip install pydantic". With uv's stats, we only know how long the individual steps take, and we don't know the CPU time.

                      Code:
                      Resolved 4 packages in 588ms
                      Downloaded 1 package in 198ms
                      Installed 1 package in 11ms
                      + pydantic==2.6.4
                      
                      real 0m0.880s
                      user 0m0.138s
                      sys 0m0.110s
                      If we just looked at uv's numbers, the process took a combined total of 786 ms, while time shows that it actually took 880 ms, about 11% longer, and a combined 248 ms of CPU time. Installing from cache was obviously significantly better:

                      Code:
                      time uv pip install pydantic
                      Resolved 4 packages in 20ms
                      Installed 1 package in 7ms
                      + pydantic==2.6.4
                      
                      real 0m0.045s
                      user 0m0.018s
                      sys 0m0.047s
                      That's excellent real time compared to dnf, but it uses over twice as much CPU time. Of course, they're also different packages from different repos, dnf provides more feedback, and sudo takes some time, too.
                      Thanks for the extra data. My performance with dnf and dnf5 is much worse than what you have. I suppose that's because I'm rocking an ancient box right now (E3-1275 v3). My experience is that dnf/dnf5 are an order of magnitude slower than something like uv, and I've seen dnf/microdnf/dnf5 die due to not enough memory on 1 GB of RAM. That is really why I think there are bigger issues in dnf.

                      My problem with your data is that while you can shave off a few seconds of overhead here and there, I don't think there are going to be big savings for typical jobs, where most of the time is dominated by doing work (installing, running scripts, retrieving from the network). All this Python penalty will be in the margins; old dnf is already a ton of C code via hawkey, librpm, and whatever else is there.

                      Of course I will switch to it once dnf stops showing up in the repos, but I think this effort is being oversold, and I'm pretty bitter that I had to upgrade my cloud boxes _just_ to support package upgrades, and switching to dnf5 didn't change that fact at all.

                      Comment
