Fedora 41 Looks To "-O3" Optimizations For Its Python Build


  • amity
    replied
    The only possible downside expressed so far is the possibility of a slightly larger Python package
    I'm not sure about that... I have personally seen -O3 introduce some VERY strange bugs in many programs.
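    For what it's worth, you can check which optimization flags a particular CPython build was compiled with from Python itself; a minimal sketch using the standard sysconfig module:

```python
# Inspect the compiler flags the running CPython interpreter was built with.
# On a Fedora 41 build, "-O3" would appear here if this change lands.
import sysconfig

cflags = sysconfig.get_config_var("CFLAGS") or ""
print(cflags)
print("built with -O3:", "-O3" in cflags.split())
```

    Note that distributions sometimes split flags across several config vars (e.g. OPT), so checking CFLAGS alone is only an approximation.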



  • piotrj3
    replied
    Originally posted by caligula View Post

    Python users are fucking idiots. At the moment it's one of the slowest languages out there outside legacy stuff like bash/sed/awk/m4/perl/tcl. Java, C#, JavaScript, PHP, Hack, Go, Dart, C/C++, Lua(jit), VB.Net, Pascal, and all the others are much faster. Ruby is another slow language (that is, the main implementation is).
    There are a lot of use cases where the benefits of Python heavily outweigh the performance issues. Example: browser integration tests. Realistically speaking, good Selenium/Playwright bindings exist for four languages: Java, C#, Python and JS. There are some lesser-known ones, but they are often not well maintained and have problems.

    Java and C# force you to write a ton of boilerplate, and when you debug and make a small change, instead of instantly testing it you have to wait for a recompile and rerun.

    JS - let's be honest, JS is not a good language. And stuff like TS still needs a build step.

    Python - you don't write boilerplate, you don't wait for things to build, the entire ecosystem is simple, and in my experience the Selenium dependency needs updating less often than the Java one.

    I haven't done integration tests in a while, but every tester I've asked who did them and tried Python preferred it over the competition. 99% of the performance lies in the browser itself, which Python doesn't affect.



  • Turbine
    replied
    Originally posted by npwx View Post
    Oh what revolutionary changes, using O3 for python builds. Unfortunately, still no subarch support, just the usual "update a bunch of packages". I'm close to the point of dropping Fedora for something else on my servers. It seems development has stalled.
    The fek, just about all packages get updated between releases. 🤔 For servers, there are limited improvements now.



  • Weasel
    replied
    Originally posted by dralley View Post
    Oh for christ's sake, stop whining about frame pointers. They've already basically paid significant dividends in terms of optimizations uncovered (for example https://blogs.gnome.org/chergert/202...ven-us-anyway/), and as someone already mentioned, the XZ backdoor was noticed partly because the exploit payload was compiled without them, which resulted in valgrind errors and made the backdoor look extra suspicious.

    Making inspecting running software easier enables people like Andres Freund to investigate "weird things" that they might otherwise ignore. Even if the cost was 2% flat across the entire ecosystem (which it's not, it's almost always <1% and usually very close to 0% - saying that "all packages" experience a 1-2% performance decrease "if not more" is pure unsubstantiated FUD) it would still be worth it. Democratizing this kind of research is a Good Thing, and it doesn't benefit only the Googles and Facebooks of the world.

    And in the very few packages where it did cause noticeable regressions, the resulting fixes actually improved over the status quo. It turns out that in any situation where frame pointers have a significant impact, you're already at the margin and would benefit from (e.g.) inlining a very small function or breaking up a very large function.
    What a load of rubbish. Imagine using a lucky coincidence as an argument: "because the exploit payload was compiled without them". What if it had been compiled with them? What the fuck.

    I don't give a single shit if some GNOME idiot wants frame pointers to sysprof his crap. 99.9% of users don't know how to run sysprof, don't need to, and won't even attempt to send a perf patch because it would likely be rejected as a "maintenance burden". Since such a tiny minority wants it, they can build their own packages with it and leave everyone else alone.



  • Ray_o
    replied
    Originally posted by user1 View Post
    I mean, the very reason Andres started investigating it was because his SSH login was taking too long.
    That is not true. Here is what he said on the LWN website:

    I didn't even notice it during logging in with ssh or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then recalled that I had seen an odd valgrind complaint in my automated testing of postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.



  • blackshard
    replied
    Originally posted by caligula View Post

    Python is often used for high performance computing as well. Why it works is because parts of the app are actually written in compiled languages. Yea, that can somewhat mitigate the problem, but it didn't prevent people from creating better languages such as Julia for these tasks. It's a much better languages for AI/ML and statistics.
    By "parts of the app" you mean the CPython extension libraries? The fact that libraries like numpy and scipy exist makes it easier for non-programmers (like researchers in physics, chemistry, or other scientific disciplines) to build simple tools, with a simple but powerful language, to analyze their data.
    It's not Python that is slow; it's its reference implementation, CPython.

    Anyway, being slow is common ground among interpreted languages, but being easier to debug is indeed a great advantage over compiled languages.

    More on that: CPython offers many ways to interface with external libraries written in other languages (ctypes to load generic .so libraries, or natively written C/C++ extension modules), which makes it perfect as a glue language.

    It's a matter of using the right tool for the right task.
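    As a concrete illustration of the glue-language point (a minimal sketch; the specific libc call is just an example):

```python
# Call a C library function directly from Python via ctypes, with no
# extension module and no build step. CDLL(None) returns a handle to the
# symbols already loaded into the process (works on POSIX systems).
import ctypes

libc = ctypes.CDLL(None)
libc.abs.argtypes = [ctypes.c_int]   # declare the C signature explicitly
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # → 42
```

    The same mechanism scales up to wrapping whole third-party .so libraries, which is how much of the scientific Python stack bridges into compiled code.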

    Originally posted by caligula View Post
    I can think of examples where Python was a totally wrong solution due to performance issues. IIRC Mr ESR used Python for Reposurgeon. Eventually the tool had to be rewritten in a better language because it requires a supercomputer to process a single gcc repository. Gentoo's portage has suffered both from the bug-proneness of Python (lacking compile time static checks) and slowness. A package manager is a critical part of any distro. You shouldn't write it in Python. If it does complex solving of dependencies and has large repositories of packages, it will become slow. Mercurial was also written in Python. Damn slow compared to git.
    It mostly depends on the algorithm you use and the kind of task. Solving dependencies is IMHO one of the tasks where the algorithm (its complexity and implementation) counts much more than the language you write it in.
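    To make that concrete: the ordering part of dependency resolution is linear-time with the right algorithm, in any language. A toy resolver using Kahn's topological sort (package names here are made up for illustration):

```python
# Toy dependency resolver: orders packages so dependencies install first.
# Kahn's algorithm runs in O(V + E) regardless of implementation language.
from collections import deque

def install_order(deps):
    # deps maps package -> list of packages it depends on
    indegree = {p: 0 for p in deps}
    dependents = {p: [] for p in deps}
    for pkg, reqs in deps.items():
        for r in reqs:
            indegree[pkg] += 1
            dependents[r].append(pkg)
    ready = deque(p for p, d in indegree.items() if d == 0)
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in dependents[p]:
            indegree[q] -= 1
            if indegree[q] == 0:
                ready.append(q)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

print(install_order({"app": ["libfoo", "libbar"], "libfoo": ["libc"],
                     "libbar": ["libc"], "libc": []}))
```

    Real package managers additionally solve version constraints, which is where most of the algorithmic complexity (and runtime) actually lives, again largely independent of the implementation language.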



  • caligula
    replied
    Originally posted by Myownfriend View Post

    Python users definitely aren't idiots. Most people who use it don't really use it for performance. They use it because it's quick to work with and a lot of libraries exist for it.
    Python is often used for high performance computing as well. It works because parts of the app are actually written in compiled languages. Yeah, that can somewhat mitigate the problem, but it didn't prevent people from creating better languages such as Julia for these tasks. It's a much better language for AI/ML and statistics.

    I can think of examples where Python was a totally wrong solution due to performance issues. IIRC Mr. ESR used Python for Reposurgeon. Eventually the tool had to be rewritten in a better language because it required a supercomputer to process a single GCC repository. Gentoo's Portage has suffered both from the bug-proneness of Python (lacking compile-time static checks) and from slowness. A package manager is a critical part of any distro; you shouldn't write it in Python. If it does complex dependency solving over large package repositories, it will become slow. Mercurial was also written in Python. Damn slow compared to git.
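    The "fast parts are compiled" effect is easy to see even without numpy; a quick sketch comparing an interpreted loop against the C-implemented builtin sum (exact ratios vary by machine and build):

```python
# Time an interpreted Python accumulation loop against the C-implemented
# builtin sum(). The gap illustrates why numeric Python code offloads hot
# loops to compiled code paths (numpy, C extensions, etc.).
import timeit

data = list(range(100_000))

def py_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

assert py_sum(data) == sum(data)  # same result either way
t_loop = timeit.timeit(lambda: py_sum(data), number=50)
t_builtin = timeit.timeit(lambda: sum(data), number=50)
print(f"interpreted loop: {t_loop:.3f}s  builtin sum: {t_builtin:.3f}s")
```

    The builtin is typically several times faster here, purely because its inner loop runs in C rather than in the bytecode interpreter.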



  • user1
    replied
    Originally posted by dralley View Post
    Oh for christ's sake, stop whining about frame pointers. They've already basically paid significant dividends in terms of optimizations uncovered (for example https://blogs.gnome.org/chergert/202...ven-us-anyway/),
    Yes, I know, and I've already read that. He's one of the Gnome devs who wanted it enabled. It still doesn't justify enabling it for the entire distro (by "entire", I mean including all the packages that are not preinstalled on Fedora by default), because as I said, it would be of no use for the rest of the packages. If it were just for Gnome and related packages? Fine.

    Originally posted by dralley View Post
    and as someone already mentioned, the XZ backdoor was noticed partly because the exploit payload was compiled without them, which resulted in valgrind errors and made the backdoor look extra suspicious.
    Let's be real here. As bad as the XZ fiasco was, when was the last time something like this happened before? I honestly don't remember. With or without frame pointers, it would've been noticed anyway. I mean, the very reason Andres started investigating it was because his SSH login was taking too long.

    Originally posted by dralley View Post
    Even if the cost was 2% flat across the entire ecosystem (which it's not, it's almost always <1% and usually very close to 0% - saying that "all packages" experience a 1-2% performance decrease "if not more" is pure unsubstantiated FUD) it would still be worth it.
    No, I didn't literally say "all of them". By "all of them" I meant the rest of the packages, and I said "potentially", which doesn't necessarily mean all of them experience a performance drop. I suggest you read the Fedora discussions about enabling frame pointers. Many, if not most, of the Fedora toolchain devs in those discussions were against enabling frame pointers.
    Last edited by user1; 13 April 2024, 12:06 PM.



  • and.elf
    replied
    Originally posted by caligula View Post

    Python users are fucking idiots. At the moment it's one of the slowest languages out there outside legacy stuff like bash/sed/awk/m4/perl/tcl. Java, C#, JavaScript, PHP, Hack, Go, Dart, C/C++, Lua(jit), VB.Net, Pascal, and all the others are much faster. Ruby is another slow language (that is, the main implementation is).
    Wow, just wow. Not OK, man. Just because you don't like it for being slow doesn't make its millions of users idiots.



  • dralley
    replied
    Originally posted by user1 View Post

    What still bothers me is how anyone thought it was acceptable to enable frame pointers since Fedora 38 for pretty much everything instead of making it opt-in. Like yeah, I get it, for some Gnome and Facebook devs it's very useful for profiling in order to achieve even greater performance gains in their software. That doesn't explain why it was enabled for almost the entire Fedora repo and not just for those who ask for it to be enabled for their packages. This way, you get performance gains for a tiny minority of packages as a result of the ability to profile, but guess what? Nobody is going to do that for the rest of the packages in the Fedora repo, so all of them are now potentially experiencing a 1-2% performance drop (if not more). And probably no one is going to notice that unless they do comparison benchmarks (and I'm sure no one is going to do that either). You may think "oh, 1-2% is not a big deal", but you probably haven't heard that even a 1% performance gain may be worth a year of work in GCC.
    Oh for christ's sake, stop whining about frame pointers. They've already basically paid significant dividends in terms of optimizations uncovered (for example https://blogs.gnome.org/chergert/202...ven-us-anyway/), and as someone already mentioned, the XZ backdoor was noticed partly because the exploit payload was compiled without them, which resulted in valgrind errors and made the backdoor look extra suspicious.

    Making inspecting running software easier enables people like Andres Freund to investigate "weird things" that they might otherwise ignore. Even if the cost was 2% flat across the entire ecosystem (which it's not, it's almost always <1% and usually very close to 0% - saying that "all packages" experience a 1-2% performance decrease "if not more" is pure unsubstantiated FUD) it would still be worth it. Democratizing this kind of research is a Good Thing, and it doesn't benefit only the Googles and Facebooks of the world.

    And in the very few packages where it did cause noticeable regressions, the resulting fixes actually improved over the status quo. It turns out that in any situation where frame pointers have a significant impact, you're already at the margin and would benefit from (e.g.) inlining a very small function or breaking up a very large function.
    Last edited by dralley; 13 April 2024, 11:13 AM.

