Fedora Stakeholders Debate Statically Linking Python For Better Performance


  • oleid
    replied
    Originally posted by Ifr2 View Post
    If there is one thing that's cheap in this world, it's disk space. I get that a ton of Python programs aren't even 3 MB, so this change would more than double their size, but such a potential performance improvement is a no-brainer to me.
    This is not about statically linking libpython into Python scripts, it's about statically linking libpython into the python executable - the interpreter itself. The scripts don't change _at all_.

    There are other programs which use libpython, e.g. some $WINDOW_MANAGER that uses it for internal scripting. Those will possibly get statically linked, too.
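
    For context, that kind of embedding goes through the CPython C API: the host program initializes the interpreter and feeds it code, and linking against libpython (statically or dynamically) is what makes that possible. A minimal sketch, assuming a C toolchain with the CPython development headers installed; the python3-config invocation is one plausible way to get the right flags:

        /* embed.c - minimal CPython embedding, the pattern a window
         * manager or editor with internal Python scripting would use.
         * One plausible build line (flags vary per distro):
         *   cc embed.c $(python3-config --embed --cflags --ldflags) -o embed
         */
        #include <Python.h>

        int main(void)
        {
            Py_Initialize();                    /* start the interpreter */
            PyRun_SimpleString("print('hello from embedded Python')");
            return Py_FinalizeEx() < 0 ? 1 : 0; /* clean shutdown */
        }

    Whichever way the proposal goes, this source does not change; the only difference is whether the final binary resolves those Py_* symbols from a shared libpython.so at runtime or carries its own statically linked copy.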



  • F.Ultra
    replied
    Originally posted by rene View Post

    instead of having the shared object mapped once in each process using it, you would have a copy-on-write modified version in each process for the relocated jumps all over the place.
    Why? The library will still jump to the same addresses regardless of how many processes link to it. I don't see how the stubs solve anything other than making the initial load of a binary faster.
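
    For what it's worth, this is easy to check on a running system: the library's code pages are mapped read-only and executable (r-xp) and shared across processes, while only the small writable pages holding relocations (the GOT and friends) are private copy-on-write. A rough sketch, assuming Linux and some installed libpython to link against (the exact -lpython3.x name below is an assumption and is distro-dependent):

        /* cowdemo.c - shows which libpython pages are actually shared.
         * Plausible build line (library version is an assumption):
         *   cc cowdemo.c -lpython3.12 -ldl -o cowdemo
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <string.h>

        extern void Py_Initialize(void);  /* resolved from libpython */

        int main(void)
        {
            Dl_info info = {0};
            if (!dladdr((void *)&Py_Initialize, &info) || !info.dli_fname) {
                fprintf(stderr, "could not resolve Py_Initialize\n");
                return 1;
            }
            printf("Py_Initialize comes from %s\n", info.dli_fname);

            /* Dump our own mappings of that file: the r-xp lines (code)
             * are shared by every process that loaded the library; the
             * rw-p lines (GOT/data) are the per-process COW part. */
            FILE *maps = fopen("/proc/self/maps", "r");
            if (!maps)
                return 1;
            char line[512];
            while (fgets(line, sizeof line, maps))
                if (strstr(line, info.dli_fname))
                    fputs(line, stdout);
            fclose(maps);
            return 0;
        }

    Only those writable pages get duplicated per process once the dynamic linker writes relocations into them; the bulk of libpython's code stays one physical copy shared system-wide.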



  • HadrienG
    replied
    Originally posted by Vistaus View Post
    I thought Fortran was the choice of many scientists???
    It has long been, and it remains a timeless classic in areas like HPC where maximal performance is required. But these days, in most areas of scientific software, Fortran tends to be displaced by higher-level languages like Matlab or, more recently, Python and Julia.



  • andyprough
    replied
    If they want better performance - maybe they should just use something that is not Fedora??



  • Vistaus
    replied
    Originally posted by HadrienG View Post
    If you're looking for a researcher-friendly language in use cases where CPU performance matters, you should probably be investigating Julia, not Python.
    I thought Fortran was the choice of many scientists???



  • rene
    replied
    Originally posted by starshipeleven View Post
    Are you rebuilding your distro from your personal PC?

    I don't even have my own distro but I have a dedicated "buildserver" for my OpenWrt stuff (an old shit PC with a bunch of old 512GB hard drives in RAID0), and with ccache it's pretty decent.
    yes, I have my own distribution (https://t2sde.org), and the last time I used ccache was a decade ago, as testing all the various compiler updates, variants (gcc vs. clang) and C libraries (glibc, musl) instantly invalidates all the ccache anyway. And even if you only build amd64/glibc/gcc, each minor update here and there invalidates most of it anyway. Not to mention that some new stuff (e.g. rust) does not even cache very well yet (or at all). I have a 64GB RAM Ryzen 2xxx, waiting for the 3950X for the next compile speedup: https://www.youtube.com/watch?v=1mkf0O-f4hU



  • rene
    replied
    Originally posted by starshipeleven View Post
    It's always a tradeoff between things. It's just that the "wasted disk and RAM space" weighs less now, in (almost) 2020, than back in the day.
    Well, I also find recompiling all the applications and libraries on each daily security issue (https://www.youtube.com/watch?v=vPlzP_aQB3Y) a bit time-, CPU-cycle- and energy-consuming. For that alone I would still prefer something like shared objects.



  • starshipeleven
    replied
    Originally posted by rene View Post
    Besides, each, even fractional, copy of a library inside each and every executable is space wasted compared to the shared-object approach, which was invented (among other things) exactly for this reason, ...
    It's always a tradeoff between things. It's just that the "wasted disk and RAM space" weighs less now, in (almost) 2020, than back in the day.



  • starshipeleven
    replied
    Originally posted by rene View Post
    Well, I for one don't want GBs of ccache on my system, plus if you rebuild more stuff, usually some header changed a little bit here and there, causing large-scale cache invalidation anyway, ...
    Are you rebuilding your distro from your personal PC?

    I don't even have my own distro but I have a dedicated "buildserver" for my OpenWrt stuff (an old shit PC with a bunch of old 512GB hard drives in RAID0), and with ccache it's pretty decent.



  • rene
    replied
    Originally posted by Raka555 View Post

    There won't be copy-on-write versions of the library in the memory of each process.
    The code that was linked from the library will be heavily optimized and integrated into the executable at a fraction of the size it was in the library.
    I thought this thread was talking about shared objects; it sounds more like you are talking about static libraries. Also, "fraction of the size it was in the library" sounds a bit like a myth, as most libraries have plenty of internal dependencies, so you don't get such large "savings" for non-trivial, more-than-"hello world" applications. Besides, each, even fractional, copy of a library inside each and every executable is space wasted compared to the shared-object approach, which was invented (among other things) exactly for this reason, ...
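
    The internal-dependency point is easy to reproduce with a toy static archive: the classic linker pulls in whole object files, so referencing a single symbol drags along everything else in the same object. A hypothetical sketch (file names invented for illustration; behavior assumes default linker settings, i.e. without -ffunction-sections/--gc-sections or LTO):

        /* lib.c - toy "library". used() depends on an internal helper,
         * and the same object file also carries unused(). Because
         * static linking copies whole members out of a .a archive,
         * calling only used() still pulls helper() and unused() into
         * the final binary. */
        #include <stdio.h>

        static void helper(void) { puts("internal dependency"); }

        void used(void)   { helper(); }
        void unused(void) { puts("linked in even though nobody calls it"); }

        /* main.c - the "application":
         *
         *   void used(void);
         *   int main(void) { used(); return 0; }
         *
         * Build and inspect (plausible commands, not from the thread):
         *   cc -c lib.c && ar rcs libtoy.a lib.o
         *   cc -o app main.c libtoy.a
         *   nm app | grep -e helper -e unused   # both symbols present
         */

    Section garbage collection (-ffunction-sections plus --gc-sections) and LTO are presumably what the "heavily optimized ... fraction of the size" claim has in mind; how much they actually shave off depends on how tangled the library's internal call graph is.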

