Linus Torvalds Just Made A Big Optimization To Help Code Compilation Times On Big CPUs
sdack - interesting. The -l option existed before the jobserver, and it never worked well because the load average updates too slowly. By the time it hits your threshold, your machine is already thrashing. I would expect the current -l implementation only works on Linux; it would be too expensive to collect this stat on other OSes.
The notion of multiple users running a build process on a single machine is pretty alien these days. Everything is VMs and containers now, with explicitly allocated CPU resources.
I'm always fascinated how severely disconnected from reality some people here are.
Make has existed for more than 30 years. Look at what has happened in those 30 years and how much everything has changed: we went from 33 MHz single-core to more than 3000 MHz multi-core. That has severely affected software and technology development.
It's impossible to replace e.g. Make in the Linux kernel build system; it's far too complex to do anything about it.
And the reason Make was successful is that it's built on the most minimal dependencies. Meson is Python-dependent (Ninja itself is C++, but the Meson layer needs Python)... which is IMHO not bad per se, but as a requirement for a kernel build? No fucking way.
Not exactly on topic, but I just realized that if I took all the cores I've owned over the past 20 years and somehow managed to run Make on all of them at the same time, I still wouldn't come close to running into the problem fixed here.
(Yeah, I don't upgrade that often.)
I've stopped using Make's jobserver and instead use "-j -l <NCPUs>". The -l option was designed around the load average, but on Linux it now uses the live count of active threads and processes. It avoids the pipe mechanism and lets multiple Make processes run on the same machine without needing to know about each other or communicate to control the process count. It even lets different users run build processes independently on the same server, as long as they all use -l to respect a system-wide threshold (i.e. the number of CPUs). Without that, several users compiling on the same server can overload the system and cause build scripts to fail.
I've been running "-l NCPU -j NCPUx2" ("-l 16 -j 32" on my current system) for the past year. It's a hair faster than "-j NCPU".
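A minimal sketch of how such an invocation could be generated, following the commenter's NCPU/NCPUx2 rule (the helper below is illustrative, not anything from the thread or from Make itself):

```python
import os
import shlex

# Build the make invocation described above: cap the live task count at
# NCPU with -l, while allowing up to 2*NCPU jobs with -j so jobs blocked
# on I/O don't leave cores idle.
ncpu = os.cpu_count() or 1
cmd = ["make", f"-l{ncpu}", f"-j{ncpu * 2}"]

print(shlex.join(cmd))
```

On a 16-core machine this prints `make -l16 -j32`, matching the numbers quoted above.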
... I would expect the current -l implementation only works on Linux; it would be too expensive to collect this stat on other OSs. ...
It uses the fourth value in /proc/loadavg. If that's not found, it falls back to whatever load-average implementation is available on the system, which is usually based on the 1-, 5- and 15-minute averages the kernel collects. And yes, those aren't very accurate and require Make to implement a heuristic on top, which still isn't very accurate. Hence it now uses the active thread and process count where available.
I don't know if any other OS supports /proc/loadavg and its fourth field, but I don't see why it would be expensive to implement; many non-Linux OSes support some version of procfs. Make only reads the file when it wants to spawn new processes, and otherwise waits for its children to terminate before reading it again and spawning more. It doesn't constantly loop over the file burning up idle CPU cycles, if that's what you're thinking.
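For reference, a sketch of what reading that fourth field involves. On Linux, /proc/loadavg holds the three smoothed averages, then a "running/total" task count, then the last PID; the sample line below is fabricated for illustration:

```python
# Parse a /proc/loadavg-style line. The fourth field ("running/total")
# is the instantaneous task count a tool like Make can consult instead
# of the smoothed load averages. Sample string is made up.
sample = "0.52 0.48 0.41 3/1792 12345"

fields = sample.split()
load1, load5, load15 = (float(f) for f in fields[:3])     # smoothed averages
running, total = (int(n) for n in fields[3].split("/"))   # live task counts

print(running, total)  # → 3 1792
```

Swapping `sample` for `open("/proc/loadavg").read()` gives the live values on a Linux box.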
I know a few FreeBSD users here I could @ to ask, if you really want an answer. I know FreeBSD pulls in /proc with KDE/Plasma. I just didn't know to add myself to the operator and wheel groups, so my install left me with a spiffy terminal where I'm unable to install anything.
I haven't added myself to the wheel group in so many years that I didn't even think about crap like that.