Fedora 41 Will Try Again To Switch To DNF5 Package Manager
I decided to do a little testing.
My internet connection isn't particularly fast or stable, but regardless:
Code:
sudo dnf clean all
time sudo dnf update

real 0m25,624s
user 0m0,006s
sys 0m0,023s

sudo dnf5 clean all
time sudo dnf5 update

real 0m18,041s
user 0m0,008s
sys 0m0,027s
Originally posted by fitzie:
I plan on using regular dnf as long as it's maintained. there really is zero value in dnf5 if you have python installed. I would have hoped that they would have addressed the slowness and memory usage of dnf, but it doesn't really fix anything.
DNF ("DNF4") is written in Python, which is the reason it's so slow. DNF5, which is written in C++, does address the slowness (and they are addressing the memory usage as well, AFAIK; from what I read it wasn't DNF's fault, at least not fully). Its being written in C++ is what makes the performance overhaul possible.
DNF5's performance benefits will be there regardless of whether Python is installed. Why would having Python installed affect anything? And why does it matter to you whether Python is installed or not? If anything else on your system requires Python, let it be; if not, remove it.
Originally posted by User42:
While your last sentence is true, there are a few rules that just make reading easier.
The Oxford comma they use there, while correct, is generally unnecessary and frowned upon when not used for disambiguation.
In English, "while" is a subordinator: putting a comma before it is generally a mistake (and here it is).
At any rate, having now read the document in its entirety, the "comma" situation is making a mountain out of a molehill... however, I do agree that most sentences are convoluted. It's not that commas are a problem, it's that they seem to like extending sentences with bits and bobs, a bit like they had a thought, and they wanted to add to it, but never really committing to actually end the bloody sentence, when it should have been done before, which actually explains pretty well why they use subordinators with commas, like I am doing presently, because every time the sentence should end, a comma is used to extend it, so it's not really a subordinator problem or a comma problem, it's clearly a writing style problem, with information added when it pops into their mind, which is actually one of the major uses of the semicolon, which is what I should have used to start with.
Of course none of the sentences are nearly as convoluted as that, but it's the principle; semicolons, or starting a new sentence, would have been better structure-wise.
Does it not check out?
Originally posted by Eudyptula:
Your comment makes no sense to me. It seems to me you have misunderstood some things.
DNF ("DNF4") is written in Python, which is the reason it's so slow. DNF5, which is written in C++, does address the slowness (and they are addressing the memory usage as well, AFAIK; from what I read it wasn't DNF's fault, at least not fully). Its being written in C++ is what makes the performance overhaul possible.
DNF5's performance benefits will be there regardless of whether Python is installed. Why would having Python installed affect anything? And why does it matter to you whether Python is installed or not? If anything else on your system requires Python, let it be; if not, remove it.
Originally posted by Myownfriend:
That's extremely weird. You said dnf5 has zero value over dnf if you have python installed, but by that logic there's also zero reason to prefer dnf over dnf5 just because you have python installed. DNF5 is significantly faster, and since Gnome Software will use the dnf5daemon back-end instead of packagekit, you won't be storing redundant metadata anymore.
Using an old version of something just because it has additional dependencies just doesn't make sense.
https://www.youtube.com/watch?v=zCy-fZSKe-U
second, dnf5 was based on microdnf, which served to deliver an experience for small-footprint installs that don't necessarily have python (hence the "micro"). when they decided to expand it to cover all of dnf's use cases, they called it dnf5.
third, just like the yum-to-dnf conversion, where a lot of features were lost (e.g. yum-plugins-changelog), the conversion from dnf to dnf5 is no different. this is the basis of my recommendation to stick with dnf as long as it is maintained. there is no real-world performance benefit to dnf5, and I'd like people to post results showing otherwise.
fourth, and more to the point, the entire model of package resolution by syncing full repos is broken. a bunch of this is because of the way-too-convoluted dependency scheme in linux packages. but if you think about things from first principles, dnf5 is slow and bloated, and didn't fix any real issue other than not depending on python. I find it funny because there are a lot of 1 GB VMs out there, and dnf5 will die on them from running out of memory.
Looking at what's happening with package managers elsewhere in the software space, you'd see that dnf is extremely slow compared to what others have accomplished. For example, I just installed a python package and its dependencies in less than a second without allocating gigs of RAM. of course rpm packages are more complicated, but do they really need to be?
Code:
$ uv pip install pydantic
Resolved 4 packages in 808ms
Downloaded 4 packages in 128ms
Installed 4 packages in 6ms
 + annotated-types==0.6.0
 + pydantic==2.6.4
 + pydantic-core==2.16.3
 + typing-extensions==4.10.0
Originally posted by fitzie:
the performance benefits of dnf5 are not significant. I just did an install of a package with a hot cache, and both dnf and dnf5 took the same time. with a stale cache, you are at the mercy of the random webservers serving the repos, and the time spent locally in C or python code is irrelevant.
These are my times when installing wine-opencl from a hot cache:

Code:
      dnf4 install  dnf5 install  dnf4 remove  dnf5 remove
Real  0m2.469s      0m2.052s      0m1.817s     0m1.375s
User  0m0.009s      0m0.011s      0m0.011s     0m0.009s
Sys   0m0.020s      0m0.019s      0m0.020s     0m0.022s

That's a 16% reduction in wall-clock time for installing and a 24% reduction for removing, with the same CPU time.
And again, dnf5 will share its cache with dnf5daemon instead of there being separate caches for dnf4 and packagekit. It's one thing to say you don't care about the performance gains, but saying there's zero advantage is just wrong.
Btw, you'd want to use "time uv pip install pydantic". With uv's own stats, we only know how long the individual steps take, and we don't know the CPU time.
Code:
Resolved 4 packages in 588ms
Downloaded 1 package in 198ms
Installed 1 package in 11ms
 + pydantic==2.6.4

real 0m0.880s
user 0m0.138s
sys 0m0.110s

Code:
$ time uv pip install pydantic
Resolved 4 packages in 20ms
Installed 1 package in 7ms
 + pydantic==2.6.4

real 0m0.045s
user 0m0.018s
sys 0m0.047s
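For anyone who wants to double-check the percentages quoted above, here is a quick back-of-the-envelope sketch using the "Real" times exactly as posted (the timing values come from the table above; everything else is just arithmetic):

```python
# Recompute the claimed speedups from the posted wall-clock ("Real") times.
timings = {
    "install": {"dnf4": 2.469, "dnf5": 2.052},
    "remove":  {"dnf4": 1.817, "dnf5": 1.375},
}

# Percent reduction in wall-clock time going from dnf4 to dnf5.
reduction = {
    action: (t["dnf4"] - t["dnf5"]) / t["dnf4"] * 100
    for action, t in timings.items()
}

for action, pct in reduction.items():
    print(f"{action}: dnf5 cuts wall-clock time by {pct:.1f}%")
```

The exact figures come out to roughly 16.9% and 24.3%, consistent with the rounded 16%/24% claims.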
Originally posted by Myownfriend:
An upgrade from a stale cache shows a 3.4x speedup. The video I linked to in my last post shows that that's because dnf5 processes and downloads metadata in parallel. That's a significant improvement for possibly the slowest thing dnf4/5 can do, and dnf5 needs to retrieve less data in some scenarios.
These are my times when installing wine-opencl from a hot cache:

Code:
      dnf4 install  dnf5 install  dnf4 remove  dnf5 remove
Real  0m2.469s      0m2.052s      0m1.817s     0m1.375s
User  0m0.009s      0m0.011s      0m0.011s     0m0.009s
Sys   0m0.020s      0m0.019s      0m0.020s     0m0.022s

That's a 16% reduction in wall-clock time for installing and a 24% reduction for removing, with the same CPU time.
And again, dnf5 will share its cache with dnf5daemon instead of there being separate caches for dnf4 and packagekit. It's one thing to say you don't care about the performance gains, but saying there's zero advantage is just wrong.
Btw, you'd want to use "time uv pip install pydantic". With uv's own stats, we only know how long the individual steps take, and we don't know the CPU time.

Code:
Resolved 4 packages in 588ms
Downloaded 1 package in 198ms
Installed 1 package in 11ms
 + pydantic==2.6.4

real 0m0.880s
user 0m0.138s
sys 0m0.110s

Code:
$ time uv pip install pydantic
Resolved 4 packages in 20ms
Installed 1 package in 7ms
 + pydantic==2.6.4

real 0m0.045s
user 0m0.018s
sys 0m0.047s
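The parallel metadata handling described in the quote above can be illustrated with a toy sketch. To be clear, this is not dnf5's actual code: the repo names and the 0.2 s "fetch" delay are invented for illustration, and a simple sleep stands in for the real HTTP download and parsing work.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical repo list; real systems have several enabled repos.
REPOS = ["fedora", "updates", "rpmfusion-free", "rpmfusion-nonfree"]

def fetch_metadata(repo: str) -> str:
    """Stand-in for downloading one repo's metadata (simulated latency)."""
    time.sleep(0.2)  # pretend network round-trip
    return f"{repo}: metadata"

# Serial: total time is roughly the SUM of all fetches.
start = time.perf_counter()
serial = [fetch_metadata(r) for r in REPOS]
serial_s = time.perf_counter() - start

# Parallel: total time is roughly the SLOWEST single fetch.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(REPOS)) as pool:
    parallel = list(pool.map(fetch_metadata, REPOS))
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s, parallel: {parallel_s:.2f}s")
```

With four repos and network-bound fetches, the parallel pass finishes in roughly the time of the single slowest fetch, which is why parallel metadata retrieval helps most on stale-cache operations.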
my problem with your data is that while you can shave off a few seconds of overhead here and there, I don't think there will be big savings for typical jobs, where most of the time is dominated by doing real work (installing, running scripts, retrieving from the network). all this python penalty is in the margins; old dnf is already a ton of C code via hawkey, librpm, and whatever else is in there.
of course I will switch to it once dnf stops showing up in the repos, but I think this effort is being oversold, and I'm pretty bitter that I had to upgrade my cloud boxes _just_ to support package upgrades, and switching to dnf5 didn't change that fact at all.