Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM
I remember compiling everything from scratch back in the 2000s on machines with 512MB RAM and only 1-2 cores. Sure, it might take a while (6-24 hrs), but RAM wasn't a hard limit on compiling the kernel. What changed such that there are now RAM-related failures?
Originally posted by andyprough View Post:
In other words, "Why don't our computers stop us from doing stupid things?"
Originally posted by sinepgib View Post:
My guess is they would need to both know about other jobs in the first place (typically you compile one file at a time in different processes for C) and have yet another platform-dependent module to get this reading. It's also hard to say what the real usage is when you account for swap: you may see lower RAM usage because the rest of the system is already thrashing, so you wouldn't hold back. Maybe the kernel could instead freeze a process in a cgroup on page fault when it reaches a percentage of the assigned memory, so you would fire up everything in a cgroup so they act as a unit and let the processes freeze. You would still need to guarantee at least one process keeps running so memory eventually gets freed, though.
Freezing is a decent idea, but it's not particularly fast, and depending on how the compiler identifies jobs, it might just spawn new ones in place of the frozen job.
For what it's worth, I was thinking my idea would require user input, in the same way the -j flag does. So, if people don't specify the flag, it'll just spawn as many jobs as it can and disregard RAM usage. Of course, it doesn't necessarily know whether a job will use only a few KB vs hundreds of MB, which is why the user would want to leave a decent buffer (such as stop spawning jobs after 80% usage).
Last edited by schmidtbag; 02 March 2023, 01:31 PM.
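Something like this can be approximated today with a wrapper script rather than a compiler feature. A minimal sketch, where the 80% threshold, the poll interval, and the per-file compile command are all illustrative assumptions, not part of any real build system:

    #!/bin/sh
    # Hypothetical RAM-aware job spawner: only start the next compile
    # job while used memory is below 80% of MemTotal.
    mem_below_limit() {
        total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
        avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
        [ $(( (total - avail) * 100 / total )) -lt 80 ]
    }
    for unit in "$@"; do
        until mem_below_limit; do sleep 1; done   # hold back new jobs
        ${CC:-cc} -c "$unit" -o "${unit%.c}.o" &
    done
    wait   # let the already-spawned jobs finish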
Originally posted by sinepgib View Post:
Well, in this case a technical user is to be expected, but most of the time it isn't, so trying is not necessarily a bad idea. Ideally a machine does its task in the least confusing way for the user, and this involves abstracting memory issues as much as practically possible.
And if you're a capable user, set up your machine accordingly to consume the resources it provides.
If you want to be on the safe side, you can wrap your compilation in a systemd-run scope limited to X GB of memory total, thus preventing an OOM situation for the rest of the system.
(It's what I'm doing.)
Should it fail, you won't lose most of your progress (since make reuses the already-compiled objects) and can start again with adjusted limits.
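For reference, a minimal sketch of such an invocation, assuming systemd with cgroup v2; the 16G cap and the job count are placeholders to adjust for your machine:

    # Run the build in a transient scope capped at 16 GB of RAM.
    # If the build exceeds the cap, only processes inside this scope
    # get OOM-killed; the rest of the system stays untouched.
    systemd-run --user --scope -p MemoryMax=16G make -j"$(nproc)"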
Originally posted by Berniyh View Post:
12-16 cores aren't uncommon these days, even in consumer systems. I doubt that many of those have more than 32 GB, though.
So 20-30 jobs with 32 GB RAM could be a very common thing.
But I don't think that the number of jobs is the problem here.
As another thought:
Why don't compilers have a feature to hold back additional jobs if your RAM usage exceeds a certain percentage? I would so much rather temporarily lose a thread here and there than to run out of memory.
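As far as I know, no mainstream compiler or build tool throttles on RAM directly; the closest existing knob is GNU make's load-average limit, which holds back new jobs indirectly once the machine is already struggling:

    # GNU make's -l flag: don't start new jobs while the load average
    # is at or above 8 (a proxy for contention, not a RAM limit).
    make -j"$(nproc)" -l 8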
Originally posted by schmidtbag View Post:
I'm a little confused by this, because isn't RAM usage when compiling correlated to the number of threads? I don't see how an 11-year-old laptop with perhaps only 8 threads has the potential to use up tens of GB of RAM unless for some reason you set the -j flag to something way larger than 8. Meanwhile, I imagine if you were to compile on something like a 96-core Epyc, you'd be using a lot more than 32GB.
But the thing that uses the most RAM is when you combine all those *.o files into a binary. That's when it goes to 4-8 GB quickly and taxes the drives hard; with some swap and OOM thrown in, you quickly have 16 GB full, and throw in some VFS cache and bang, you have 32 GB.
So if you optimize that step, you save a lot of time spent swapping etc. on low-RAM devices, and low RAM would be 8 GB here. And no, I never built an allyesconfig kernel; I can't even imagine how big it is.
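If you want to see for yourself where the memory peaks in your own tree, GNU time can report it (assuming /usr/bin/time is the GNU version; the targets are just examples):

    # Build everything, then re-link vmlinux alone and record its peak.
    # "Maximum resident set size" in the output is the number to watch;
    # it reflects the largest single process, not the sum of all jobs.
    make -j"$(nproc)"
    rm -f vmlinux
    /usr/bin/time -v make vmlinux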
Originally posted by Brook-trout View Post:
Linux developers would be hailed as heroes if they came up with a utility that would build a list of modules required based on your hardware and the software you will be employing. A Linux kernel built and tuned for your machine is the best of all in a working environment, especially if you are deploying multiple identical hardware computers.
Basically, if you do that, your issue tracker will be full of people complaining that their system won't start and that it's your fault, because you didn't consider some corner case.
That trouble isn't really worth the relatively low gain you would actually get from compiling your own kernel with a specific configuration.
Sure, there are a few options that optimize specifically for your system, but to get those it would be much easier and more foolproof to just pull the default config of your distribution and change those few options. That can also be done automatically via a small script, if you insist on automation.
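Worth noting the kernel tree already ships something close to this: make localmodconfig trims a config down to the modules currently loaded. A minimal sketch along those lines, assuming your distro exposes its config via /proc/config.gz or /boot (paths vary by distro):

    # Start from the distro's known-good config...
    zcat /proc/config.gz > .config    # or: cp /boot/config-"$(uname -r)" .config
    # ...fill in defaults for new options, then drop modules that aren't
    # currently loaded (plug in the hardware you care about first).
    make olddefconfig
    make localmodconfig
    make -j"$(nproc)"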