
Massive ~2.3k Patch Series Would Improve Linux Build Times 50~80% & Fix "Dependency Hell"


  • #21
    Originally posted by Alex/AT View Post

    In case of some significant header file change in this hellish mix of dependencies, ccache will get you approximately and literally nothing.
    That's what this patchset is also going to improve.
    Don't use "literally" if you don't know its meaning.
    Last edited by RedEyed; 03 January 2022, 06:51 AM.



    • #22
      Originally posted by RedEyed View Post
      1. Don't use "literally" if you don't know its meaning.
      2. ccache gives A LOT. Do you know how it works, and have you ever tried it?
      1. Don't don't me.
      2. I use it every working day. Literally. Any header file change invalidates the cached compilation result for every file that includes that header. In a large project this means the ccache cache is invalidated extremely frequently.
      3. It's not about not 'giving A LOT'. It's about header file dependencies. Just in case you are still unable to grasp this.
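The invalidation mechanics described above can be sketched with a toy model (this illustrates the keying idea only, not ccache's actual implementation; the file names and dependency map are made up):

```python
# Toy model of why a header change defeats a compile cache: a cached
# object is keyed by the contents of the source file plus every header
# it includes, so touching one widely included header invalidates the
# cache entry of every file using it.

# Hypothetical dependency map: source file -> headers it (transitively) includes.
deps = {
    "sched.c":  {"sched.h", "types.h"},
    "fork.c":   {"sched.h", "mm.h", "types.h"},
    "vmscan.c": {"mm.h", "types.h"},
}

def invalidated(changed_header):
    """Return the source files whose cached objects must be rebuilt."""
    return sorted(src for src, hdrs in deps.items() if changed_header in hdrs)

print(invalidated("mm.h"))     # only the files including mm.h: ['fork.c', 'vmscan.c']
print(invalidated("types.h"))  # a "hub" header: every cache entry is cold
```

In the kernel, hub-like headers are included almost everywhere, so one change leaves the entire cache cold; untangling exactly those dependencies is what the patch series targets.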



      • #23
        I can almost feel the future despair from users of out-of-tree modules. Bye nvidia-legacy drivers, Realtek USB Wlan, zenpower, hid-xpadneo, ZFS, PDS, fsync, AMD p-state … seems like a forceful relationship pause is incoming this year…

        Well, I am hopeful that it won't turn out *that* bad, but using 30 patches on top of gentoo-sources has me a bit worried in this regard.



        • #24
          Time to merge the patch set and release Linux kernel 6.0?



          • #25
            It is great to see that kernel developers don't shy away from such a monumental task - the results seem to be worth it.
            Last edited by ms178; 03 January 2022, 09:12 AM.



            • #26
              Originally posted by microcode View Post
              zapcc is clang-based, and seems like it would be a real mess for kernel dev, compared to the status quo. Yes, kernel builds could stand to be faster, but the current experience of building the kernel is very straightforward; it would be much more complicated with a tool like that.
              This answer is fine, but readers should know that the most important sentences in the article are:
              "It's a massive patch series and likely the single biggest Linux kernel feature ever by code size. For now though it's being initially sent out as a 'request for comments'."
              Let's tackle zapcc (as an idea) first:
              Every big change in a project of this size has to bring gains large enough to stand a chance of being included, and also has to satisfy some other requirements (stability, portability, etc.). To change the dev toolchain of the Linux kernel (or any open source project of this size) you would need, e.g.:
              - a better (faster) compiler
              - a compiler ported to and well tested on all architectures the project currently builds for
              - a compiler that is proven, meaning on the market long enough: at least 5 years (half a decade), but to be safe, 10 years
              - and finally (after all of the above is satisfied), the gains need to outweigh the problems and the resistance to change; e.g. compilation times should be shorter by enough to justify the work of implementing the change and changing the habits of key maintainers.

              With that in mind, GCC probably won't be swapped for zapcc or anything else anytime soon. That doesn't mean such a change is impossible, but the probability is low. Check the BitKeeper/git story from almost 20 years ago, an example that change is possible when needed.

              Ingo Molnar is clearly experienced enough, and did not submit his changes as a (single) patch, but as an RFC. It is not a change of habits or of the compilation process; it is basically one large set of commits (code changes), or "just" simple code refactoring (on a big scale). He's aware that a change of this size is not possible overnight.

              I expect the RFC will eventually be accepted, but only after two "simple" requirements are met:

              1. Porting the per_task() infrastructure to all currently supported architectures.
              2. A review of this gigantic patch set by the maintainers, down to the last letter.

              And that will not happen in a day or two.



              • #27
                Originally posted by Alex/AT View Post
                1. Don't don't me.
                2. I use it every working day. Literally. Any header file change invalidates the cached compilation result for every file that includes that header. In a large project this means the ccache cache is invalidated extremely frequently.
                3. It's not about not 'giving A LOT'. It's about header file dependencies. Just in case you are still unable to grasp this.
                Got it, you mean that if everything depends on one header file and that file is changed, ccache will give you nothing. Agreed.



                • #28
                  Originally posted by kiffmet View Post
                  I can almost feel the future despair from users of out-of-tree modules.
                  Yes, the biggest drawback is that many third-party patches will have to be adjusted to the changed layout.
                  But then, those who carry such patches without pushing changes upstream have already had to do this for every (related) change to kernel internals, and the proposed header changes are not even the most challenging kind: it's only a matter of layout, not of huge functional changes.



                  • #29
                    Originally posted by NobodyXu View Post
                    Now I understand why @atomsymbol says that this is substituting for algorithms that are missing from C, though I cannot agree with that.
                    This headers change does not substitute any algorithms, because no algorithm can automatically fix this.

                    Once a header is included, it has to be parsed; no algorithm can cherry-pick parts of headers to speed up parsing, since that would violate the C/C++ standard.
                    I think that's atomsymbol's point: that the C language is bad at this.

                    C++ modules are supposed to improve this, but they have been slow to materialize and may never end up in C.
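The cost of mandatory parsing can be made concrete with a small sketch: inclusion in C is textual and transitive, so a single #include forces the compiler to process every reachable header (the header names and line counts below are invented for illustration; include guards are modeled by the `seen` set):

```python
# Sketch of why one #include is expensive: the compiler must parse every
# line of every header reachable from it, exactly once per translation
# unit (include guards prevent re-parsing, not first-time parsing).

includes = {  # header -> headers it directly includes (hypothetical)
    "sched.h":    ["mm.h", "spinlock.h"],
    "mm.h":       ["spinlock.h", "types.h"],
    "spinlock.h": ["types.h"],
    "types.h":    [],
}
lines = {"sched.h": 400, "mm.h": 900, "spinlock.h": 300, "types.h": 200}

def lines_to_parse(header, seen=None):
    """Total lines the compiler parses for one #include of `header`."""
    seen = set() if seen is None else seen
    if header in seen:          # include guard: already parsed in this TU
        return 0
    seen.add(header)
    return lines[header] + sum(lines_to_parse(h, seen) for h in includes[header])

print(lines_to_parse("sched.h"))  # -> 1800: one include drags in the whole subtree
```

Flattening and decoupling headers shrinks that reachable subtree per translation unit, which is where the claimed build-time win comes from.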



                    • #30
                      I just wonder why this wasn't broken up into more stages of refactoring. Perhaps the initial goal was more modest, but each new round of cleanups exposed more opportunities, and the changes simply ballooned until they touched nearly everything.

                      It'd be easier, less painful, and lower-risk to make such changes in multiple stages.

