Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM


  • #31
    Originally posted by cj.wijtmans
    Also, make should support limiting build jobs by RAM usage; it supports limiting by CPU load, so why not RAM?
    Does it limit by CPU load? I thought it was always controlled only in terms of active jobs with -j. I still think limiting a group of processes by resources is cgroups' job, but in any case make is a more reasonable place than the compiler for that.
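    For what it's worth, a rough sketch of the cgroups route using systemd-run (the 16G cap is an arbitrary example value, and it assumes cgroups v2 with the memory controller delegated to your user):

        # Run the whole build in a transient cgroup scope with a hard RAM cap;
        # if the build exceeds the limit, the kernel's OOM killer reaps it
        # instead of taking the whole machine down.
        systemd-run --user --scope -p MemoryMax=16G make -j"$(nproc)"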

    Originally posted by peppercats
    modprobed-db is preferable; otherwise you may miss modules that aren't currently loaded
    Interesting.
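    For anyone curious, modprobed-db's documented workflow is roughly the following (the database path is its default; treat this as a sketch):

        # Record every module ever seen loaded on this machine (run it
        # periodically, e.g. from a timer, so nothing gets missed).
        modprobed-db store
        # Build a config limited to the recorded module set instead of
        # only what happens to be loaded right now.
        make LSMOD="$HOME/.config/modprobed.db" localmodconfig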

    • #32
      Originally posted by clevelandrocks
      I remember compiling everything from scratch back in the 2000s on machines with 512MB RAM and only 1-2 cores. Sure, it might take a while (6-24hrs), but RAM wasn't a hard limit to compiling the kernel. What changed that there are now RAM-related failures?
      There are a lot more drivers, they got a lot bigger, you're compiling all of them into one binary (no picking your specific hardware, no modules), and trying to optimise the result in expensive ways which require a lot of state.
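      A quick way to see the scale is to count how many options an everything-enabled config turns on (exact numbers vary by kernel version and architecture, so treat them as illustrative):

          # Generate the everything-built-in config this article is about,
          # then count the options compiled directly into vmlinux (=y).
          make allyesconfig
          grep -c '=y' .config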

      • #33
        Originally posted by GreenReaper
        and trying to optimise the result in expensive ways which require a lot of state.
        I hadn't thought of this, but I guess allyesconfig could disable many optimizations. AFAICT the result isn't supposed to be run, only to prove that everything builds, so the only speed that matters is build speed.
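        If someone wanted to try that, one sketch would be to flip the optimisation-level choice after generating the config (CC_OPTIMIZE_FOR_SIZE is a real Kconfig option; whether -Os actually reduces link-time memory is an assumption worth testing):

            make allyesconfig
            # Swap the compiler optimisation choice from -O2 to -Os, then let
            # olddefconfig resolve any options that depend on the change.
            scripts/config --disable CC_OPTIMIZE_FOR_PERFORMANCE \
                           --enable CC_OPTIMIZE_FOR_SIZE
            make olddefconfig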

        • #34
          Originally posted by sinepgib
          I hadn't thought of this, but I guess allyesconfig could disable many optimizations. AFAICT the result isn't supposed to be run, only to prove that everything builds, so the only speed that matters is build speed.
          Well, it will probably use whatever the default options are, which would probably be to make an optimised build. The more options enabled, the more code and symbols there are to consider at once. As the linked patch says, the issue arises "when they try to build the final (large) vmlinux.o."
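          For anyone who wants to see where the memory actually goes, a rough way to isolate the final link's peak RSS with GNU time (a sketch; it just forces the late link stages to re-run):

              # Build everything once, then delete only the final image and
              # re-run the link under time(1); "Maximum resident set size"
              # in the output is the peak memory of the re-linked stages.
              make -j"$(nproc)"
              rm -f vmlinux
              /usr/bin/time -v make vmlinux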

          • #35
            Originally posted by kylew77
            I have a question that is maybe a bit off topic, but in the OpenBSD world they relink the kernel at each boot across all architectures, even single-board computers like Raspberry Pis, Alphas, SPARC64s, etc., basically computers where 2GB of RAM is A LOT of RAM. For the most part this works; there is a flag to disable it on i486 and i586 systems, but it mostly works with 2GB of RAM. There are no kernel modules any more in OpenBSD; everything is built into the kernel. Is their kernel that much smaller than the Linux kernel that it doesn't need 32GB for linking? Or does actual compilation use more RAM than linking does?
            The difference is most likely that OpenBSD devs actually know what they're doing.

            • #36
              I have the impression that OpenBSD is far less likely to have drivers created for / accepted into it, especially in terms of "edge cases" with relatively low usage. If there is less to link, there is less to optimise. Of course, their linker may also have similar optimisations to those added here.

              • #37
                Originally posted by GreenReaper
                I have the impression that OpenBSD is far less likely to have drivers created for / accepted into it, especially in terms of "edge cases" with relatively low usage. If there is less to link, there is less to optimise. Of course, their linker may also have similar optimisations to those added here.
                I think you are right. The last time I did a make menuconfig on Gentoo Linux, it had all kinds of support for radios and WiFi drivers that OpenBSD just doesn't support. The project doesn't take anything requiring a non-disclosure agreement either, so most modern Realtek WiFi cards aren't supported, for example, and neither is the nouveau Nvidia driver. All the AI accelerator drivers that Linux supports are absent on OpenBSD too. I think the biggest drivers on OpenBSD are the AMDGPU and i915 Intel graphics drivers; everything else is pretty small. Remarkably, there is not even a separate NVMe driver: NVMe drives are supported as emulated SCSI drives and show up as sd devices, not nvme like in FreeBSD and Linux, so their NVMe driver must be some kind of shim. That is why I've wondered whether NVMe drives get their full performance on OpenBSD, using basically emulated SCSI code for them.

                Don't believe me about the NVMe driver? Taken right from the driver's manual page: "Although the NVMe specification provides its own command set, the nvme driver provides access to the storage via a SCSI translation layer." https://man.openbsd.org/nvme

                • #38
                  640K should be enough for anyone.

                  • #39
                    Originally posted by mulenmar
                    640K should be enough for anyone.
                    The fun thing is we got to the far worse opposite extreme of nonsense beliefs: "everyone has at least 16GB and can buy more if we ask them to".

                    • #40
                      Originally posted by sinepgib
                      Does it limit by CPU load? I thought it was always controlled only in terms of active jobs with -j. I still think limiting a group of processes by resources is cgroups' job, but in any case make is a more reasonable place than the compiler for that.
                      Interesting.
                      It's the -l (load) parameter: make won't spawn new jobs when the given load average is reached. Setting the load limit to half your cores and -j to the core count is a pretty good strategy for keeping the build to roughly half your cores, because a job doesn't always max out a CPU the whole time. A -mX or -rX parameter (if it doesn't already exist) that stops spawning new jobs when a RAM limit is reached should do the trick just fine.
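                      Concretely, the -j/-l combination described looks like this (values are illustrative):

                          # Allow up to one job per core, but stop spawning new jobs
                          # whenever the load average exceeds half the core count.
                          make -j"$(nproc)" -l "$(( $(nproc) / 2 ))"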
                      Last edited by cj.wijtmans; 09 March 2023, 06:44 PM.
