Linux 6.3 Improvements Yield Better Chances Of Successfully Compiling The Kernel With ~32GB RAM
Originally posted by GreenReaper: I have the impression that OpenBSD is far less likely to have drivers created for / accepted into it, especially in terms of "edge cases" with relatively low usage. If there is less to link there is less to optimise. Of course, their linker may also have similar optimisations to those added here.
Don't believe me about the NVMe driver? Taken right from the man page of the driver: "Although the NVMe specification provides its own command set, the nvme driver provides access to the storage via a SCSI translation layer." https://man.openbsd.org/nvme
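To see that translation layer from userland (a minimal sketch, assuming an OpenBSD box with an NVMe SSD), the namespaces should show up as sd(4) SCSI disks rather than as a separate device class:

```sh
# nvme(4) attaches namespaces through the SCSI midlayer, so the
# disks are listed as plain sdN devices.
sysctl hw.disknames
# The autoconf messages show the nvme controller and the scsibus
# it exposes; grep for both to see the attachment chain.
dmesg | grep -E 'nvme|scsibus|^sd'
```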
I have the impression that OpenBSD is far less likely to have drivers created for / accepted into it, especially in terms of "edge cases" with relatively low usage. If there is less to link there is less to optimise. Of course, their linker may also have similar optimisations to those added here.
Originally posted by kylew77: I have a question that is maybe a bit off topic, but in the OpenBSD world they relink the kernel at each boot across all architectures, even single-board computers like Raspberry Pis, Alphas, SPARC64s, etc. Basically computers where 2GB of RAM is A LOT of RAM, and for the most part this works; there is a flag to disable it on i486 and i586 systems, but it mostly works with 2GB of RAM. There are no kernel modules any more in OpenBSD; all drivers are built into the kernel. Is their kernel that much smaller than the Linux kernel that it doesn't need 32GB for linking? Or does actual compilation use more RAM than linking does?
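For reference, the relinking being described is OpenBSD's KARL, and its effect is easy to observe; a minimal sketch, assuming a stock install where the kernel lives at /bsd:

```sh
# KARL links a freshly reordered kernel on every boot, so the
# checksum of /bsd should differ from one boot to the next.
sha256 /bsd    # record the hash
# ...reboot, then compare...
sha256 /bsd    # a different hash means the boot-time relink ran
```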
Originally posted by sinepgib: I haven't thought of this, but I guess allyesconfig could disable many optimizations. AFAICT it's not supposed to be run, only to ensure everything builds, so the only speed that matters is build speed.
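For context, the usual allyesconfig workflow really is just a build-coverage smoke test:

```sh
# Enable everything that can be enabled; the result is a compile
# test, not a kernel anyone boots.
make allyesconfig
# The only question is whether (and how fast) it builds.
make -j"$(nproc)"
```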
Originally posted by GreenReaper: "and trying to optimise the result in expensive ways which require a lot of state."
Originally posted by clevelandrocks: I remember compiling everything from scratch back in the 2000s on machines with 512MB RAM and only 1-2 cores. Sure, it might take a while (6-24hrs), but RAM wasn't a hard limit to compiling the kernel. What changed such that there are now RAM-related failures?
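Much of what changed sits in the final link rather than in compilation: modern configs carry DWARF debug info and sometimes LTO, both of which the linker must hold in memory at once. A minimal sketch of trimming both, assuming a recent tree with scripts/config (symbol names vary by kernel version):

```sh
# Debug info and LTO are the two big link-time memory consumers.
# scripts/config edits .config in place; rerun olddefconfig
# afterwards so dependencies are resolved.
./scripts/config --enable DEBUG_INFO_NONE   # drop DWARF generation
./scripts/config --enable LTO_NONE          # skip whole-kernel LTO
make olddefconfig
make -j"$(nproc)"
```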
Originally posted by cj.wijtmans: Also, make should support limiting build jobs by RAM usage; it supports limiting by CPU load, so why not RAM?
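For reference, the load throttle in question is GNU make's -l flag; there is no RAM equivalent, so a crude stand-in (the per-job budget below is a hypothetical number) is to size -j against available memory:

```sh
# GNU make can throttle on load average but not on RAM:
make -j16 -l 8    # start no new jobs while loadavg is above 8
# Crude RAM-based stand-in: budget roughly one job per 2 GB.
jobs=$(( $(free -g | awk '/^Mem:/{print $2}') / 2 ))
make -j"$(( jobs > 0 ? jobs : 1 ))"
```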
Originally posted by peppercats: modprobed-db is preferable, else you may miss modules that aren't currently loaded.
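The difference in practice (a short sketch, assuming modprobed-db's default database location under ~/.config):

```sh
# Plain localmodconfig trims the config to modules loaded right now:
make localmodconfig
# Feeding it modprobed-db's accumulated history also keeps modules
# that merely happen not to be loaded at the moment:
make LSMOD="$HOME/.config/modprobed.db" localmodconfig
```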