Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM

schmidtbag: Actually, zswap and zram can be used to compress unused RAM, which can be very helpful for compiling. I used to build firefox/llvm/gcc/linux on Gentoo with 16G of RAM. It often ran OOM, so I configured zswap, and since then it compiles these projects decently fast without running into OOM.
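For anyone who wants to try the same thing, here is a minimal sketch of checking and enabling zswap through sysfs. The /sys/module/zswap/parameters/ paths are the standard ones, but the pool-size value is only an illustrative choice, and the whole thing assumes a kernel built with CONFIG_ZSWAP plus an active swap device for zswap to front:

```python
from pathlib import Path

# zswap exposes its knobs under /sys/module/zswap/parameters/
# (reading usually works as any user; writing requires root).
PARAMS = Path("/sys/module/zswap/parameters")

def show_zswap() -> None:
    # Print the current value of every zswap parameter.
    for p in sorted(PARAMS.iterdir()):
        print(f"{p.name} = {p.read_text().strip()}")

def enable_zswap(max_pool_percent: int = 25) -> None:
    # Turn zswap on and cap its compressed pool at a fraction of RAM.
    # 25% is just an illustrative starting point, not a recommendation.
    (PARAMS / "enabled").write_text("1")
    (PARAMS / "max_pool_percent").write_text(str(max_pool_percent))

if __name__ == "__main__":
    enable_zswap()
    show_zswap()
```

The same setting can also be made persistent by adding zswap.enabled=1 to the kernel command line.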
-
localyesconfig doesn't use lspci or lsusb, unfortunately. And using localyesconfig is useless when you don't build anything as a module.
Also, make should support limiting build jobs by RAM usage; it already supports limiting them by CPU load (the -l flag), so why not by RAM?
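Until make grows such an option, one workaround is to pick the -j value from available memory up front. A minimal sketch, where the 2 GiB-per-job budget is purely an assumed figure (heavy C++ or LTO link jobs need far more):

```python
import os
import subprocess

def mem_available_kib() -> int:
    # MemAvailable is the kernel's estimate of memory usable without
    # swapping (present since Linux 3.14).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

def job_count(kib_per_job: int = 2 * 1024 * 1024) -> int:
    # Cap jobs by CPU count and by an assumed per-job RAM budget,
    # whichever is smaller.
    by_ram = max(1, mem_available_kib() // kib_per_job)
    return min(os.cpu_count() or 1, by_ram)

if __name__ == "__main__":
    subprocess.run(["make", f"-j{job_count()}"], check=True)
```

This only decides the job count once at startup; it can't react if memory use balloons mid-build, which is exactly why a native RAM-aware option in make would be nicer.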
-
Originally posted by sinepgib: That's called "localyesconfig": https://www.kernel.org/doc/Documenta...ide/README.rst
-
Originally posted by onlyLinuxLuvUBack: young whippersnapper, back in the day we swapped floppy sides and shoveled snow, and it built character. :P
-
Originally posted by bearoso: Thanks, this is what I was looking for. I wonder what the uncompressed total is.
-
Originally posted by Berniyh: Edit: btw, the gzip-compressed image is 209 MB. Not something you want to use.
I can imagine being back in 1997 and swapping 150 boot floppies when the computer restarts.
-
Originally posted by Brook-trout: Linux developers would be hailed as heroes if they came up with a utility that builds a list of required modules based on your hardware and the software you will be running. A Linux kernel built and tuned for your machine is the best of all in a working environment, especially if you are deploying multiple identical computers.
-
Originally posted by Berniyh: Compiling a kernel (or anything, really) is not something an ordinary user should do.
And if you're a capable user, set your machine up in line with the resources it provides.
If you want to be on the safe side, you can wrap your compilation in a systemd-run instance limited to X GB of memory total, thus preventing an OOM situation.
(It's what I'm doing.)
Should it fail, you won't lose most of your progress (since make reuses the already-compiled objects) and can start again with adjusted limits.
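As a concrete illustration of that approach, here is a minimal sketch that launches a build inside a transient systemd scope with a hard memory cap. The systemd-run flags are the standard ones; the 16G cap and the make -j8 command are placeholder choices, and MemoryMax needs cgroup v2 to be enforced:

```python
import subprocess

def run_capped(cmd: list[str], mem_max: str = "16G") -> int:
    # Run `cmd` in a transient systemd scope whose memory use is
    # capped at mem_max; if the build exceeds it, the kernel kills
    # the scope instead of the whole machine thrashing or OOMing.
    wrapper = [
        "systemd-run", "--user", "--scope",
        "-p", f"MemoryMax={mem_max}",
        *cmd,
    ]
    return subprocess.run(wrapper, check=False).returncode

if __name__ == "__main__":
    # Rerunning after a kill resumes from the already-compiled
    # objects, as noted above, so little progress is lost.
    rc = run_capped(["make", "-j8"])
    print("build exited with", rc)
```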
-
Originally posted by schmidtbag: Well, it seems to me the compilers are aware of how many jobs are running, since the number never exceeds the limit that is set.
Originally posted by schmidtbag: It's not hard for a program to read RAM usage; swap isn't really suitable for compiling, so it might as well be disregarded.
Originally posted by schmidtbag: Freezing is a decent idea, but it's not particularly fast, and depending on how the compiler identifies jobs, it might just spawn new ones in place of the frozen job.
Regarding speed, I'm not sure why it wouldn't be fast. It's just switching queues to blocked, or something like that.
Originally posted by schmidtbag: For what it's worth, I was thinking my idea would require user input, in the same way the -j flag does. So if people don't specify the flag, it'll just spawn as many jobs as it can and disregard RAM usage. Of course, it doesn't necessarily know whether a job will use only a few KB or hundreds of MB, which is why the user would want to leave a decent buffer (such as stopping new jobs after 80% usage).
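A minimal sketch of that 80% gate, polling /proc/meminfo before each new job is spawned. The threshold comes from the quote above; the sleep commands are placeholders standing in for real compile jobs:

```python
import subprocess
import time

def mem_used_fraction() -> float:
    # Fraction of RAM in use, from MemTotal and MemAvailable.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are in kB
    return 1.0 - info["MemAvailable"] / info["MemTotal"]

def spawn_when_below(cmds, threshold: float = 0.80, poll: float = 0.5) -> None:
    # Start each job only while RAM usage is under the threshold;
    # jobs already running are left alone (no freezing), matching
    # the "stop spawning jobs after 80% usage" idea.
    procs = []
    for cmd in cmds:
        while mem_used_fraction() >= threshold:
            time.sleep(poll)
        procs.append(subprocess.Popen(cmd))
    for p in procs:
        p.wait()

if __name__ == "__main__":
    # Placeholder jobs; a real build system would generate these.
    spawn_when_below([["sleep", "1"] for _ in range(8)])
```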