Massive ~2.3k Patch Series Would Improve Linux Build Times 50~80% & Fix "Dependency Hell"
-
Originally posted by RedEyed:
1. Don't use "literally" if you don't know its meaning.
2. ccache gives A LOT. Do you know how it works, and have you ever tried it?
1. Don't don't me.
2. I use it every working day. Literally. Any header file change invalidates the compile cache entries for every file that includes that header. In a large project this means the ccache cache is invalidated extremely frequently.
3. It's not about ccache not 'giving A LOT'. It's about header file dependencies. Just in case you are still unable to grasp this.
-
I can almost feel the future despair of users of out-of-tree modules. Bye nvidia-legacy drivers, Realtek USB WLAN, zenpower, hid-xpadneo, ZFS, PDS, fsync, AMD P-state … seems like a forceful relationship pause is incoming this year…
Well, I am hopeful that it won't turn out *that* bad, but running 30 patches on top of gentoo-sources has me a bit worried in this regard.
-
Originally posted by microcode:
zapcc is clang-based, and seems like it would be a real mess for kernel dev, compared to the status quo. Yes, kernel builds could stand to be faster, but the current experience of building the kernel is very straightforward; it would be much more complicated with a tool like that.
"It's a massive patch series and likely the single biggest Linux kernel feature ever by code size. For now though it's being initially sent out as a 'request for comments'."
Every big change in a project of this size has to bring gains large enough to stand a chance of being included, and must also satisfy some other requirements (stability, portability, etc.). To change the dev toolchain of the Linux kernel (or any open-source project of this size), you would need, e.g.:
- a better (faster) compiler
- that compiler ported and well tested on all architectures the project currently builds for
- a proven compiler, meaning on the market long enough: at least 5 years (half a decade), but to be safe, 10 years
- and finally (after all of the above is satisfied), the gains from the change need to outweigh the problems and the resistance to change; e.g. compilation times should be shorter by enough to offset the work needed to implement the change and to change the habits of key maintainers.
With that in mind, gcc probably won't be replaced by zapcc or anything else anytime soon. That doesn't mean such a change is impossible, but the probability is low. Check the BitKeeper/git story from almost 20 years ago, an example that change is possible when needed.
Ingo Molnar is clearly experienced enough, and did not submit his changes as a (single) patch, but as an RFC. This is not a change of habits or of the compilation process; it is basically one large set of commits (code changes), or "just" simple code refactoring (on a big scale). He is aware that a change of this size is not possible overnight.
I envision that the RFC will eventually be accepted, but only after two "simple" requirements are met:
1. Porting the per_task() infrastructure to all currently supported architectures.
2. Review of this gigantic patch set by maintainers, down to the last letter.
And that will not happen in a day or two.
-
Originally posted by Alex/AT:
1. Don't don't me.
2. I use it every working day. Literally. Any header file change invalidates the compile cache entries for every file that includes that header. In a large project this means the ccache cache is invalidated extremely frequently.
3. It's not about ccache not 'giving A LOT'. It's about header file dependencies. Just in case you are still unable to grasp this.
-
Originally posted by kiffmet:
I can almost feel the future despair from users of out-of-tree modules.

Well, those doing that without pushing their changes upstream have already had to cope with every (related) change of kernel internals, and the proposed header changes are not even the most challenging kind: it's only a matter of layout, not of huge functional changes.
-
Originally posted by NobodyXu:
Now I understand why @atomsymbol says that this is substituting for algorithms that are missing from C, though I cannot agree with that.

This header change does not substitute any algorithms, because no algorithm can automatically fix this.
Once a header is included, it has to be parsed; no algorithm can cherry-pick parts of headers to speed up parsing, since that would violate the C/C++ standard.
C++ modules are supposed to improve this, but they have been slow to materialize and may never end up in C.
-
I just wonder why this wasn't broken up into more stages of refactoring. Perhaps the initial goal was more modest, but each new round of cleanups exposed more opportunities, and the changes simply ballooned until they touched nearly everything.
It would be easier, less painful, and lower-risk to make such changes in multiple stages.