Linus Torvalds Just Made A Big Optimization To Help Code Compilation Times On Big CPUs


  • #11
This might help particularly well with icecc or distcc builds, where you run -j100 locally and end up with many make jobs but not many local compile jobs.

    Comment


    • #12
      Originally posted by xinorom View Post

      Make has been an exemplar of good parallelism since long before multi-threaded apps were commonplace. This patch is improving kernel performance in a way that Make has no control over. Even Torvalds acknowledges this in the commit message and also that the "locking tokens" pattern used by the Make jobserver is a legitimate (albeit not typical) use of pipes.

      tl;dr what you say is the exact opposite of reality and you clearly have no clue what you're talking about.

      Ninja has some advantages over Make, but they're mostly to do with how it computes which files have changed -- not parallelism.
Well, ninja calculates dependencies faster and, in my experience, does directory descent faster. That can help it fan out faster than make, but it depends on circumstances, and ninja being better than make doesn't mean make isn't pretty fucking great, particularly for 40-year-old technology.

      Comment


      • #13
        Originally posted by andyprough View Post
        Linus and ESR getting their hands dirty within weeks of each other. What's next? RMS writing emacs from scratch a third time?
        Linus and ESR are leagues apart in terms of competence. Linus is genius tier, whereas ESR is brainlet tier, at best.
        Last edited by xinorom; 09 February 2020, 05:18 AM.

        Comment


        • #14
          Originally posted by zboszor View Post
          Now all projects can start reviving their autotools build systems.
          Also, I wonder why people who opposed systemd ("it's not the traditional UNIX way") didn't oppose introducing a new build system based on an obscure new language.
          After all, shell scripts and Makefiles are the traditional UNIX way. :-)
          Because the vast majority of people don't build and don't code. The vast majority of those complainers are just edgy teenagers who think installing a Linux distro turns them into "1337 haxxors" and they complain about stuff just to complain and waste our time. Systemd is far more visible to them than make.

          Comment


          • #15
            Sorry to be a bit pedantic here, and English is not my native language, but wouldn't the GNU Make job-server be a benefactee here, as opposed to a benefactor?

            Comment


            • #16
              Originally posted by carewolf View Post
This might help particularly well with icecc or distcc builds, where you run -j100 locally and end up with many make jobs but not many local compile jobs.
              How/why?

              Comment


              • #17
                Originally posted by SteamPunker View Post
                Sorry to be a bit pedantic here, and English is not my native language, but wouldn't the GNU Make job-server be a benefactee here, as opposed to a benefactor?
                Or beneficiary even...

                Comment


                • #18
Hiya - I'm the original author/inventor of the GNU Make jobserver code. FYI, this use of pipes was chosen because pipes exist on every POSIX platform in existence. Back in 1991, when I developed it, it was the only IPC mechanism you could count on being available across the plethora of Unix flavors out there, as well as many non-Unix RTOSes and other more exotic systems. It was also guaranteed to have the desired semantics - single-byte reads and writes are atomic - which meant it could be used without relying on any other OS- or version-specific APIs. So yes, from a software quality standpoint - simplicity, reliability, portability, correctness - this isn't just a *good* use of pipes, it's a *perfect* use. There are *no* other mechanisms that fulfill all of those points.
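For anyone curious what that token pattern looks like in practice, here's a minimal sketch (not GNU Make's actual code; all names are illustrative): the pipe is pre-loaded with one token byte per allowed job, and because single-byte reads and writes on a pipe are atomic, no additional locking is needed.

```c
/* Hypothetical sketch of the jobserver token pattern: a pipe pre-loaded
 * with token bytes, one per permitted parallel job. Function names are
 * illustrative, not GNU Make's actual API. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int job_fds[2]; /* [0] = read end, [1] = write end */

/* Fill the pipe with n-1 tokens: the invoking process always
 * holds one implicit job slot for itself. */
static void jobserver_init(int n)
{
    if (pipe(job_fds) != 0) { perror("pipe"); exit(1); }
    for (int i = 1; i < n; i++)
        write(job_fds[1], "+", 1);
}

/* Block until a job slot is free; the single-byte read is atomic,
 * so exactly one waiter gets each token. */
static char acquire_token(void)
{
    char tok;
    if (read(job_fds[0], &tok, 1) != 1) { perror("read"); exit(1); }
    return tok;
}

/* Return the slot so another process may start a job. */
static void release_token(char tok)
{
    write(job_fds[1], &tok, 1);
}
```

Any sub-make inheriting the pipe's file descriptors can participate: it reads a token before spawning a compile job and writes it back when the job exits, which is why the mechanism composes across arbitrarily nested make invocations.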

And the patch described in this article is arguably fixing a long-standing Linux bug: the same kind of thundering-herd phenomenon that had already been eradicated from other IPC mechanisms. https://en.wikipedia.org/wiki/Thundering_herd_problem

                  [edit] Linus talking about the bug in more detail here.
                  Last edited by highlandsun; 09 February 2020, 08:22 AM.

                  Comment


                  • #19
I've stopped using Make's jobserver and instead use "-j -l <NCPUs>". The -l option was designed to use the load average, but on Linux it now uses the live count of runnable threads and processes. This avoids the pipe mechanism and lets multiple Make processes run on the same machine without needing to know about each other or communicate to control the process count. It even lets different users run builds independently on the same server, as long as they all use -l to respect a system-wide threshold, i.e. the number of CPUs; without that, the system can overload and build scripts can fail when several users compile on the same server at once.
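A rough illustration of the load check described above, assuming the real /proc/loadavg layout (the fourth field is a "running/total" pair of process counts); the function names here are hypothetical, not Make's actual code:

```c
/* Hypothetical sketch of a -l style gate: read the count of currently
 * runnable tasks from /proc/loadavg and only start a new job while the
 * system is below the CPU count. The field layout matches the real
 * /proc/loadavg format; the function names are illustrative. */
#include <stdio.h>

/* Returns the number of currently runnable tasks, or -1 on error
 * (e.g. on a system without /proc/loadavg). */
static int runnable_tasks(void)
{
    FILE *f = fopen("/proc/loadavg", "r");
    if (!f) return -1;
    double l1, l5, l15;          /* 1-, 5-, 15-minute load averages */
    int running = -1, total = 0; /* fourth field: "running/total" */
    if (fscanf(f, "%lf %lf %lf %d/%d", &l1, &l5, &l15, &running, &total) != 5)
        running = -1;
    fclose(f);
    return running;
}

/* Start another job only while the machine is not saturated. */
static int may_start_job(int ncpus)
{
    int r = runnable_tasks();
    return r >= 0 && r < ncpus;
}
```

Because every process checks the same system-wide figure, independent builds from different users throttle each other without any shared pipe or other explicit coordination, which is the property the comment is describing.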

                    Comment


                    • #20
                      In case anybody else is curious, this is the related fix in GNU Make: http://git.savannah.gnu.org/cgit/mak...93f5225b400714

                      Comment
