Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM


  • #21
    Originally posted by schmidtbag View Post
    Well, it seems to me the compilers are aware of how many jobs are operating, since it never exceeds the limit that is set.
    Not really. It's typically make that is limiting how many jobs are started. The compiler only knows you asked it to compile one source file.
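
    For example, it's GNU make's -j flag that caps concurrency; each compiler process it launches knows nothing about its siblings:

        make -j8    # make starts at most 8 jobs; each compiler instance handles one file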

    Originally posted by schmidtbag View Post
    It's not hard for a program to read RAM usage; swap isn't really suitable for the purposes of compiling so that might as well be disregarded.
    While this is true for the compiler, what you're really optimizing for here is the system as a whole. If swap is enabled and other applications are thrashing because the compiler coexists with them, things become less predictable: the kernel still wastes a lot of time finding memory for those programs instead of running your compiler, and so on.

    Originally posted by schmidtbag View Post
    Freezing is a decent idea, but it's not particularly fast, and depending on how the compiler identifies jobs, it might just spawn new ones in place of the frozen job.
    You use make to limit the number of jobs; just pass the right flag. You would freeze as many processes in the cgroup as you need, so whether make spawns more processes doesn't matter that much: those will be frozen too.
    Regarding speed, I'm not sure why it wouldn't be fast. It's essentially just moving tasks to a blocked queue or something like that.
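
    Roughly, with cgroup v2 it would look like this (the kbuild cgroup name is just an example, and $BASHPID assumes bash):

        mkdir /sys/fs/cgroup/kbuild
        # the subshell moves itself into the cgroup, then becomes make;
        # every job make spawns lands in the same cgroup
        ( echo $BASHPID > /sys/fs/cgroup/kbuild/cgroup.procs; exec make -j16 ) &
        # freeze every task in the cgroup, including jobs spawned afterwards
        echo 1 > /sys/fs/cgroup/kbuild/cgroup.freeze
        # thaw once memory pressure is gone
        echo 0 > /sys/fs/cgroup/kbuild/cgroup.freeze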

    Originally posted by schmidtbag View Post
    For what it's worth, I was thinking my idea would require user input, in the same way the -j flag does. So, if people don't specify the flag, it'll just spawn as many jobs as it can and disregard RAM usage. Of course, it doesn't necessarily know if a job will only use a few KB vs hundreds of MB, which is why the user would want to leave a decent buffer (such as stop spawning jobs after 80% usage).
    All I'm saying is that this is more practical to do by limiting the scope to a cgroup rather than with heuristics about whole-system behavior.
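
    The memory cap on that same cgroup is a one-liner too (24G is an arbitrary example value):

        echo 24G > /sys/fs/cgroup/kbuild/memory.max
        # or memory.high, to throttle reclaim instead of invoking the OOM killer
        echo 24G > /sys/fs/cgroup/kbuild/memory.high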



  • #22
    Originally posted by Berniyh View Post
    Compiling a kernel (or anything, really) is not something an ordinary user should do.

    And if you're a capable user, set up your machine accordingly to make use of the resources it provides.
    If you want to be on the safe side, you can wrap your compilation in a systemd-run instance limited to X GB of memory total, thus preventing an OOM situation.
    (It's what I'm doing.)
    Should it fail, you won't lose most of your progress (since make reuses what's already compiled) and can start again with adjusted limits.
    Which is why I said that in this specific case you expect a technical user. Re: systemd-run, I think that's pretty much what I suggested.
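
    For the record, that approach is something like this (the 32G cap is whatever suits your machine):

        # transient scope: the whole build shares a single memory cap
        systemd-run --scope -p MemoryMax=32G make -j$(nproc)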



  • #23
    Originally posted by Brook-trout View Post
    Linux developers would be hailed as heroes if they came up with a utility that would build a list of modules required based on your hardware and the software you will be employing. A Linux kernel built and tuned for your machine is the best of all in a working environment, especially if you are deploying multiple identical hardware computers.
    This already exists: make localmodconfig (or make localyesconfig if you prefer built-in). I maintain my own kernel for my own devices and just keep adding exactly what is required for each new device I cover.
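
    The usual flow, for reference (run it while all the hardware you care about is attached, so its modules are loaded):

        # tailor the current .config to the modules that are loaded right now
        make localmodconfig
        # same idea, but flips those modules to built-in (=y)
        make localyesconfig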



  • #24
    Originally posted by Berniyh View Post
    Edit: btw the gzip compressed image is 209 MB big. Not something you want to use.
    Thanks, this is what I was looking for. I wonder what the uncompressed total is.

    I can imagine being back in 1997 and swapping 150 boot floppies when the computer restarts.



  • #25
    Originally posted by bearoso View Post
    Thanks, this is what I was looking for. I wonder what the uncompressed total is.

    I can imagine being back in 1997 and swapping 150 boot floppies when the computer restarts.
    Young whippersnapper, back in the day we swapped floppy sides and shoveled snow, and it built character. :P



  • #26
    Originally posted by bearoso View Post
    Thanks, this is what I was looking for. I wonder what the uncompressed total is.

    I can imagine being back in 1997 and swapping 150 boot floppies when the computer restarts.
    The uncompressed one was 1.7 GB, iirc.



  • #27
    Originally posted by onlyLinuxLuvUBack View Post
    Young whippersnapper, back in the day we swapped floppy sides and shoveled snow, and it built character. :P
    Ha. You laugh, but in those days I was stupid and didn't want to install a bootloader that would overwrite Windows's. So for a few years I used a boot floppy. It got to the point where I had to custom-compile a smaller kernel to stay under 1.4 MB. Looking back, there were a lot of alternatives, but how would I have known?



  • #28
    Originally posted by sinepgib View Post
    modprobed-db is preferable; otherwise you may miss modules that aren't currently loaded.
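
    A sketch of that workflow (the database path is modprobed-db's default):

        # run periodically; accumulates every module that has ever been loaded
        modprobed-db store
        # then point localmodconfig at the database instead of the live lsmod output
        make LSMOD=$HOME/.config/modprobed.db localmodconfig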



  • #29
    localyesconfig doesn't use lspci or lsusb, unfortunately. And using localyesconfig is useless when you don't use modules.
    Also, make should support limiting build jobs by RAM usage; it supports limiting by CPU load, so why not RAM?
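
    The load-average knob it does have looks like this (the threshold is an example value):

        # don't start new jobs while the load average is 8 or higher
        make -j16 -l 8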



  • #30
    schmidtbag Actually, zswap and zram can be used to compress rarely used pages in RAM, which can be very helpful for compiling.
    I used to build firefox/llvm/gcc/linux on Gentoo with 16 GB of RAM.
    It often ran OOM, so I configured zswap, and since then it can compile these projects reasonably fast without running into OOM.
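
    For anyone wanting to replicate: zswap can be flipped on at runtime (zstd and the 25% pool cap are illustrative choices):

        echo 1 > /sys/module/zswap/parameters/enabled
        echo zstd > /sys/module/zswap/parameters/compressor
        echo 25 > /sys/module/zswap/parameters/max_pool_percent
        # or persistently, via the kernel command line:
        #   zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25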

