Fedora Stakeholders Debate Statically Linking Python For Better Performance
Originally posted by F.Ultra:
But even on fork() the binary would still call the same address in the library so why would it have to be modified on CoW? (I must be missing something here).
Originally posted by F.Ultra:
But this is a distribution we are talking about here, their build scripts will rebuild all the statically linked packages when there is an update to the underlying libraries.
Because applying the same change to 2 binaries is stupid when you could apply it to just one, and it scales endlessly with the number of static links you have. Why on Earth would you do that? Why wouldn't you fix the actual problem instead, that is, improve dynaload performance? The whole world started using dynamic linking because the performance impact was proved to be negligible. How is it that Python still manages to fail in this regard? Surely it's not their fault, and the whole world is wrong instead?
Last edited by anarki2; 08 November 2019, 08:51 AM.
Originally posted by ermo:
(emphasis mine)
I was under the impression that Go explicitly encourages statically linked binaries? I could be wrong, but doesn't that entail the exact same issue that you're decrying wrt. the bad old days of MS software development?
Note that I have absolutely no beef with Go.
If you still can't tell whether that was sarcasm or not, let me help you out: GDI and msvcrt.dll were just metaphors and examples of the underlying problem.
Regarding Go, I haven't really looked into it; if they do that, I retract my statement about that.
Last edited by anarki2; 08 November 2019, 08:48 AM.
Originally posted by anarki2:
Regarding Go, I haven't really looked into it, if they do that, I retract my statement about that.
Originally posted by Britoid:
Do you think bundling libssl, gnutls, glibc, curl etc. into each application would be wise?
This is nearly 2020; we need sandboxing by default for all applications anyway. Besides, in an open-source distro you can get away with your buildbots rebuilding stuff at the drop of a hat.
Decent package managers can already handle delta updates too, so download bandwidth won't change much.
Originally posted by anarki2:
The problem with redundancy is never space; it's rather change management. If there's a vulnerability in libpython, the dudes might patch that but forget about the statically linked stuff in Python itself. Or in any other statically linked binary on the system, for that matter. It becomes completely impossible to track. You should never have several copies of the same library. That's why dynamic linking exists in the first place. We've already learned that the hard way, decades ago. Perhaps the most emblematic example is the GDI one:
https://docs.microsoft.com/en-us/sec.../2004/ms04-028
Back in the day, everyone just bundled DLLs instead of linking properly. That's why you had like 500 copies of msvcrt.dll, 99% of them containing vulnerabilities thanks to lazy developers not keeping their stuff up to date. Because devs only care about the software "working", and they always forget about the IT part, where it should also stay reliable, secure, and manageable.
And now these clever folks reinvent that (ugly) wheel once again, instead of fixing the dynaload performance issues of their crappy binary. GG. With so many fundamental issues in Python these days, don't be surprised by how fast Go takes over.
openSUSE Tumbleweed uses a similar system: each time a library is updated, all software relying on it is recompiled and run through the automated QA; once that clears, the updated library and the recompiled software are pushed to the repos as updates.
Originally posted by Ifr2:
If there is something cheap in this world, it is disk space. I get that a ton of Python programs aren't even 3 MB, so this change would more than double their size, but for such a potential performance improvement it's a no-brainer to me.
But a statically linked app loads with sequential reads, while a dynamically linked app needs to do a lot of seeks, so on an HDD static linking takes less of a performance hit.
I prefer statically linked apps over packaged flatpaks/snaps. You have the same mess with security updates, where everything needs to be rebuilt/repackaged.
Statically linked apps can actually have unused library code optimized away (if the library was properly written), while a dynamic lib keeps everything from the library in memory; the unused parts only get released later, under memory pressure.
I would very much like to see a distro where everything is statically linked, except for the stuff that really needs dynamic linking.
Originally posted by anarki2:
Yes. Unless they forget to do that on each and every statically linked package. Or are you implying packages and build scripts are always perfect?
This won't change for a statically linked distro because the build system still needs this information.
If the dependency manifest is wrong, you either have a missing dependency or useless extra dependencies.
"Forgetting to update statically linked packages" is not possible unless the build system is complete garbage and skips some packages at random.