Building A Full Linux Debug Kernel Optimized From 53GB To 25GB Heap Use

  • Building A Full Linux Debug Kernel Optimized From 53GB To 25GB Heap Use

    Phoronix: Building A Full Linux Debug Kernel Optimized From 53GB To 25GB Heap Use

    Processing the vmlinux.o object with objtool has been the most memory-intensive step of the Linux kernel build process. Prior patches have already worked to reduce objtool's memory use while compiling the Linux kernel, and a big patch series now queued for Linux 6.5 is set to sharply reduce the maximum heap use...

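    For anyone curious where the memory actually goes in their own builds, below is a minimal sketch (not from the article or the patch series) of a C wrapper that runs a command and reports the peak resident set size of its waited-for children via getrusage(). Peak RSS is only a rough proxy for heap use, but it is enough to spot a step like objtool processing vmlinux.o dominating a build's memory footprint.

    /* peakrss.c -- run a command and report the peak resident set size
     * of the waited-for child processes (a rough proxy for heap use).
     * Hypothetical helper, not part of the kernel build or the patches. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execvp(argv[1], &argv[1]);   /* run the wrapped command */
            perror("execvp");
            _exit(127);
        }

        int status;
        waitpid(pid, &status, 0);

        struct rusage ru;
        getrusage(RUSAGE_CHILDREN, &ru); /* stats for reaped descendants */
        /* ru_maxrss is reported in kilobytes on Linux */
        printf("peak RSS: %.2f GiB\n", ru.ru_maxrss / (1024.0 * 1024.0));

        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }

    Wrapping a build with it (e.g. ./peakrss make -j8, a hypothetical invocation) prints the largest resident set reached by any step the build waited on, which makes a 50GB+ objtool pass hard to miss.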

  • #2
    Tbh 52GB for this kind of processing doesn't even seem excessive to me. At the same time, I understand that since this isn't the only thing running on that machine, it meant 64GB RAM couldn't really cut it. And now it can. So it's a nice plus.

  • #3
    Does this mean, if I run an application in debug mode, it starts and crashes faster?
    Hi

  • #4
    Originally posted by stiiixy:
    Does this mean, if I run an application in debug mode, it starts and crashes faster?
    You mean the improvement?

    The improvement is about how object files are handled during the build. Only objtool got faster and needs less memory when handling debug info. So no, the built application should have the same or very similar characteristics.

  • #5
    Michael

    Typo

    "benefits too form these objtool optimizations" (should be "from", not "form")

  • #6
    Amazing. Let's hope it doesn't get lost along the way like the fast kernel headers patch series did.

  • #7
    Originally posted by bug77:
    Tbh 52GB for this kind of processing doesn't even seem excessive to me.
    Obviously it is if it can be cut in half that easily.

  • #8
    Originally posted by bachchain:
    Obviously it is if it can be cut in half that easily.
    Yes, compiling C is still a horrible waste of CPU cycles. It was already a terrible language to compile back when Pascal was more widely in use. C's use of include headers instead of proper modules makes it really sucky. Note that most of the issues with compiling C come from the clumsy grammar and slow front-end processing; the core of the language isn't that bad. It would just need a new syntax and a module system. Combining make and gcc would lead to further speedups, and macro processing would also benefit from a more efficient meta-language for conditional compilation and the like.

    It's basically the same issue as using Bash for general-purpose computing. The builtins and grammar are so horrible that Python beats it hands down even though it doesn't even use a JIT.
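
    The header-versus-modules point is easy to see on any glibc system with gcc or clang installed; the snippet below (an illustration, not from the thread or the article) shows how much preprocessed text a single #include drags into every translation unit.

    /* hello.c -- tiny translation unit to illustrate the header cost
     * discussed above. Assumes gcc or clang on a glibc system; exact
     * line counts vary by toolchain. */
    #include <stdio.h>   /* one include, re-parsed by every .c that uses it */

    int main(void)
    {
        puts("hello");
        return 0;
    }

    /* Preprocess it to see how much text the front end actually parses:
     *
     *     gcc -E hello.c | wc -l
     *
     * Even this single libc header expands to several hundred lines with
     * glibc; large project or C++ headers expand to far more, and the work
     * is repeated for every translation unit, which is exactly what a real
     * module system avoids. */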

  • #9
    Originally posted by bug77:
    Tbh 52GB for this kind of processing doesn't even seem excessive to me. At the same time, I understand that since this isn't the only thing running on that machine, it meant 64GB RAM couldn't really cut it. And now it can. So it's a nice plus.
    You don't really need that much RAM; you can use swap. It will slow things down, since swapping this workload is essentially pure random I/O (reads and writes), but it's genuine working memory rather than a leak, and that's exactly the kind of thing swap exists for.
