Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM


  • #11
    Originally posted by Brook-trout View Post
    Linux developers would be hailed as heroes if they came up with a utility that would build a list of required modules based on your hardware and the software you will be employing. A Linux kernel built and tuned for your machine is the best of all in a working environment, especially if you are deploying multiple identical hardware computers.
    That's an idea that sounds good at first, but if you think the scenario through, there's a lot of trouble ahead.
    Basically, if you do that, your issue tracker will be full of people complaining that their system won't start and that it's your fault, because you didn't consider some corner case.
    That trouble isn't really worth the relatively low gain you would actually get from compiling your own kernel with a specific configuration.

    Sure, there are a few options that optimize specifically for your system, but to get those, it would be much easier and more fail-proof to just pull the default config of your distribution and change those few options. That can also be done automatically via a small script, if you insist on automation.
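    Something like this would do it; just a rough sketch, assuming a distro that ships its config as /boot/config-$(uname -r) and that you're sitting in the kernel source tree (the two options toggled are only examples):

      # Start from the distro's config and flip a few options via scripts/config
      cp "/boot/config-$(uname -r)" .config
      ./scripts/config --disable DEBUG_INFO --enable PREEMPT
      make olddefconfig    # let Kconfig fill in anything the old config doesn't cover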

    Comment


    • #12
      Originally posted by Brook-trout View Post
      Linux developers would be hailed as heroes if they came up with a utility that would build a list of required modules based on your hardware and the software you will be employing. A Linux kernel built and tuned for your machine is the best of all in a working environment, especially if you are deploying multiple identical hardware computers.
      That's called "localyesconfig" https://www.kernel.org/doc/Documenta...ide/README.rst
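      Roughly how it's used, assuming you already have a starting .config and the hardware you care about is plugged in so its modules are loaded:

        make localmodconfig      # trims the .config down to the modules currently loaded
        # or: make localyesconfig   # same idea, but builds them into the kernel instead of as modules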

      Comment


      • #13
        Originally posted by schmidtbag View Post
        I'm a little confused by this, because isn't RAM usage when compiling correlated to number of threads? I don't see how an 11 year old laptop with perhaps only 8 threads has the potential to use up tens of GB of RAM unless for some reason you set the -j flag to something way larger than 8. Meanwhile, I imagine if you were to compile on something like a 96 core Epyc, you'd be using a lot more than 32GB.
        This objtool patch is about the end phase of compiling the kernel, where vmlinux.o is created; it has nothing to do with make -j. The -j flag sets the number of parallel jobs so you use the CPU cores better, and each gcc/ld/as invocation basically uses only one thread. Yes, more jobs use more RAM.

        But the thing that uses the most RAM is combining all those *.o files into one binary. That's when usage quickly climbs to 4-8 GB and taxes the drives hard; with some swap and OOM activity thrown in you quickly fill 16 GB, add some VFS cache and bang, you're at 32 GB.

        So if you optimize that step, you save a lot of time otherwise lost to swapping on low-RAM devices (and "low RAM" here means about 8 GB). And no, I've never built an allyesconfig kernel; I can't even imagine how big it is.
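        If you want to see that spike yourself on an already-built tree, a rough way to do it (assumes GNU time is installed as /usr/bin/time):

          rm -f vmlinux vmlinux.o          # force only the final objtool/link step to rerun
          /usr/bin/time -v make vmlinux    # "Maximum resident set size" roughly shows the biggest single step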

        Comment


        • #14
          Originally posted by Berniyh View Post
          12-16 cores aren't uncommon these days even in consumer systems. I doubt that many of those have more than 32 GB, though.
          So 20-30 jobs with 32 GB RAM could be a very common thing.
          But I don't think that the number of jobs is the problem here.
          Makes sense, but a good rule of thumb is 1GB per thread for casual users, 1.5GB per thread for advanced users, and 2GB per thread for power users. Of course, it's someone else's thumb when we're talking about servers. Compiling a kernel isn't a task for casuals. So, 32GB on a 16c/32t CPU is, to me, a little on the low side.

          As another thought:
          Why don't compilers have a feature to hold back additional jobs if your RAM usage exceeds a certain percentage? I would so much rather temporarily lose a thread here and there than to run out of memory.
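          As far as I know, make can only throttle on load average (-l), not memory, so the closest thing today is picking -j up front from a rule of thumb like the one above. A quick sketch (the 2 GB-per-job budget is just that rule of thumb, not a standard):

            # Cap -j by both CPU count and a 2 GB-per-job memory budget
            mem_gb=$(awk '/MemAvailable/ {printf "%d", $2/1048576}' /proc/meminfo)
            jobs=$(( mem_gb / 2 )); cpus=$(nproc)
            [ "$jobs" -gt "$cpus" ] && jobs=$cpus
            [ "$jobs" -lt 1 ] && jobs=1
            make -j"$jobs"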

          Comment


          • #15
            Originally posted by schmidtbag View Post
            As another thought:
            Why don't compilers have a feature to hold back additional jobs if your RAM usage exceeds a certain percentage? I would so much rather temporarily lose a thread here and there than to run out of memory.
            In other words, "Why don't our computers stop us from doing stupid things?"

            Comment


            • #16
              Originally posted by schmidtbag View Post
              As another thought:
              Why don't compilers have a feature to hold back additional jobs if your RAM usage exceeds a certain percentage? I would so much rather temporarily lose a thread here and there than to run out of memory.
              My guess is they would need to both know about the other jobs in the first place (typically you compile one file at a time in separate processes for C) and have yet another platform-dependent module to get this reading. It's also hard to say what the real usage is once you account for swap: you may see lower RAM usage because the rest of the system is already thrashing, and then you wouldn't hold back. Maybe the kernel could instead freeze a process in a cgroup on page fault when it reaches a percentage of the assigned memory, so you would fire up everything in one cgroup so it acts as a unit and let processes freeze. You would still need to guarantee that at least one process keeps running so memory eventually gets freed, though.
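              For the cgroup half of that, a rough sketch assuming cgroup v2 is mounted at /sys/fs/cgroup and you have the rights to create a group there (the 24G figure is arbitrary):

                sudo mkdir /sys/fs/cgroup/kbuild
                echo 24G | sudo tee /sys/fs/cgroup/kbuild/memory.high    # reclaim/throttle above this instead of OOM-killing
                echo $$  | sudo tee /sys/fs/cgroup/kbuild/cgroup.procs   # move this shell (and its future children) in
                make -j"$(nproc)"
                # cgroup.freeze exists too, but as said, something has to keep running so memory actually gets freed:
                # echo 1 | sudo tee /sys/fs/cgroup/kbuild/cgroup.freeze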

              Comment


              • #17
                Originally posted by andyprough View Post
                In other words, "Why don't our computers stop us from doing stupid things?"
                Well, in this case a technical user is to be expected, but most of the time that isn't the case, so trying is not necessarily a bad idea. Ideally a machine does its task in the least confusing way for the user, and that involves abstracting memory issues away as much as is practically possible.

                Comment


                • #18
                  Originally posted by sinepgib View Post

                  Well, in this case a technical user is to be expected, but most of the time that isn't the case, so trying is not necessarily a bad idea. Ideally a machine does its task in the least confusing way for the user, and that involves abstracting memory issues away as much as is practically possible.
                  Compiling a kernel (or anything really) is not something an ordinary user should do.

                  And if you're a capable user, set up your machine accordingly to consume the resources it provides.
                  If you want to be on the safe side, you can wrap your compilation in a systemd-run instance limited to X GB of memory total, thus preventing an OOM situation.
                  (It's what I'm doing.)
                  Should it fail, you won't lose most of your progress (since make reuses the already compiled stuff) and can start again with adjusted limits.
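                  For reference, one way that invocation could look; the exact flags and limits are a sketch, not necessarily what I run word for word:

                    systemd-run --user --scope -p MemoryMax=16G -p MemorySwapMax=2G make -j"$(nproc)"
                    # --user needs the memory controller delegated to your user manager, which is the default on most cgroup-v2 distros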

                  Comment


                  • #19
                    Originally posted by andyprough View Post
                    In other words, "Why don't our computers stop us from doing stupid things?"
                    lol yup. Though in this case, it's not necessarily predictable how much memory the compiler is going to use. For something as large as a kernel build, it'd be nice to have a system in place to allow the compilation to complete reliably without either buying more RAM or setting a static lower job count.


                    Originally posted by sinepgib View Post
                    My guess is they would need to both know about the other jobs in the first place (typically you compile one file at a time in separate processes for C) and have yet another platform-dependent module to get this reading. It's also hard to say what the real usage is once you account for swap: you may see lower RAM usage because the rest of the system is already thrashing, and then you wouldn't hold back. Maybe the kernel could instead freeze a process in a cgroup on page fault when it reaches a percentage of the assigned memory, so you would fire up everything in one cgroup so it acts as a unit and let processes freeze. You would still need to guarantee that at least one process keeps running so memory eventually gets freed, though.
                    Well, it seems to me the compilers are aware of how many jobs are running, since the limit that is set is never exceeded. It's not hard for a program to read RAM usage, and swap isn't really suitable for compiling anyway, so it might as well be disregarded.
                    Freezing is a decent idea, but it's not particularly fast, and depending on how the compiler identifies jobs, it might just spawn new ones in place of the frozen job.
                    For what it's worth, I was thinking my idea would require user input, in the same way the -j flag does. So, if people don't specify the flag, it'll just spawn as many jobs as it can and disregard RAM usage. Of course, it doesn't necessarily know whether a job will only use a few KB or hundreds of MB, which is why the user would want to leave a decent buffer (such as stopping new jobs past 80% usage).
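                    Just to make that concrete, a toy sketch of the 80% buffer as an external watchdog; the cc1 process name (gcc's compiler proper) and the thresholds are only for illustration, and a real version would hook into make itself:

                      # Pause compile jobs when available memory drops below 20%, resume once it recovers
                      while sleep 2; do
                          pct=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%d", a*100/t}' /proc/meminfo)
                          if [ "$pct" -lt 20 ]; then
                              pkill -STOP -x cc1    # temporarily "lose a thread here and there"
                          else
                              pkill -CONT -x cc1
                          fi
                      done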
                    Last edited by schmidtbag; 02 March 2023, 01:31 PM.

                    Comment


                    • #20
                      I remember compiling everything from scratch back in the 2000s on machines with 512 MB RAM and only 1-2 cores. Sure, it might take a while (6-24 hrs), but RAM wasn't a hard limit on compiling the kernel. What changed so that there are now RAM-related failures?

                      Comment
