Building The Default x86_64 Linux Kernel In Just 16 Seconds


  • #31
    Originally posted by nils_ View Post
    I wonder though if most files aren't cached already anyways - the impact may not be as noticeable as some think.
    The difference might be noticeable if you have a spinning HDD and don't use noatime/relatime. I tried flushing the caches, then compiling three times each on a SATA SSD, an NVMe SSD, and tmpfs. With the x86 defconfig, tmpfs was the fastest at 2m0,052s on average; the SATA and NVMe SSDs were almost as fast at 2m0,787s and 2m0,643s. Still, I'd expect I/O performance to matter more once the compile itself is eight times faster.
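
    For anyone who wants to reproduce a cold-cache run like this, here is a rough sketch (the source path and job count are illustrative, not my exact setup):

    # build the x86_64 defconfig with a cold page cache (run as root for drop_caches)
    cd /path/to/linux                            # hypothetical location of the source tree
    make mrproper && make defconfig
    sync && echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
    time make -j"$(nproc)"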



    • #32
      Originally posted by Michael View Post

      It's not a matter of "not heard of", but rather trying to be realistic - how many people actually build in tmpfs?
      My build sub-directories have been on tmpfs since 2004...
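
      For anyone who hasn't tried it, a throwaway RAM-backed build directory is only a couple of commands (mount point and size are just examples):

      # mount a tmpfs build directory; contents vanish on unmount/reboot
      mkdir -p /mnt/build
      mount -t tmpfs -o size=16G tmpfs /mnt/build
      make O=/mnt/build defconfig && make O=/mnt/build -j"$(nproc)"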



      • #33
        In the single-core CPU and HDD era it was common to compile with -j2 to minimize waiting for I/O.

        Maybe a fistful of extra jobs could still help?
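
        If anyone wants to test that, oversubscribing the job count is a one-liner (the +2 is arbitrary):

        # a couple of jobs beyond the core count can hide I/O stalls on slow disks
        time make -j"$(( $(nproc) + 2 ))"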



        • #34
          Also of interest: a second make clean and make once all the sources are already in the page cache.
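
          Something along these lines would show the warm-cache case (purely illustrative):

          # the first build populates the page cache, the second one measures the cached case
          make clean && time make -j"$(nproc)"
          make clean && time make -j"$(nproc)"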



          • #35
            I think there's something fishy in the way Linux scaling works in some applications... Compiling and video encoding do not scale very well from 1 socket to 2 sockets, unlike crypto hashing or John the Ripper cracking. These bottlenecks have to be addressed before 2 sockets start to make sense; right now perf/$ tops out at 1 socket for apps that appear bottlenecked even though their workloads can be parallelized.

            It could be I/O... but it could also be RAM operations that are slow (due to latency or bandwidth), or it could be that one thread does the scheduling of work for the others and, while it is busy, delays handing out tasks to the rest. Or it could be something else...
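
            One cheap way to test the cross-socket theory is to pin an entire build to a single socket and compare it with a free-running build (node number and job counts are just examples):

            # confine the build to socket 0's CPUs and memory, then compare with an unpinned run
            time numactl --cpunodebind=0 --membind=0 make -j32
            time make -j64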



            • #36
              Wow!

              I can remember building the kernel for Red Hat 4 or 5 on a laptop of the day. It was one of those things where you go to bed and hope it is finished before morning. Of course laptops of that time period really sucked, but man, this is a massive delta.



              • #37
                Originally posted by Michael View Post

                It's not a matter of "not heard of", but rather trying to be realistic - how many people actually build in tmpfs?
                I would bet good money that it's many times more than the number of people who build on Optane. I have been building on tmpfs for years, and I maintain a Linux distro. Having /var/lib/mock on tmpfs is absolutely the way forward.
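
                For reference, the persistent form of that setup is a single fstab line (size and mode here are illustrative; mock also has its own tmpfs plugin that can be enabled in its configuration):

                # /etc/fstab: keep the mock build roots in RAM
                tmpfs  /var/lib/mock  tmpfs  size=32G,mode=0755  0  0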



                • #38
                  Originally posted by nanonyme View Post

                  Meh, that sounds nasty. People usually assume /var/tmp persists over reboot.
                    I've never found anything that has a problem with /tmp and /var/tmp on tmpfs, and I've been using that setup for as long as tmpfs has existed.



                  • #39
                    Originally posted by wizard69 View Post
                    Wow!

                    I can remember building the kernel for Red Hat 4 or 5 on a laptop of the day. It was one of those things where you go to bed and hope it is finished before morning. Of course laptops of that time period really sucked, but man, this is a massive delta.
                    I remember being blown away when I could build a kernel in 16 *minutes*.

                    I should add that I've worked on machines where it was more like 16 hours. 386sx/16 with 4MB of EMM on an ISA card. Laptop PATA IDE drives hanging off the ISA bus as well. It's all I had free to do MD driver testing on, so it's what I used. Thank goodness for scripts that can just go run on their own.



                    • #40
                      Just out of curiosity, I decided to try compiling this kernel on a computer I have access to.
                      I did it three times: once on a RAID 6 array, once on a Fusion-io drive, and once in /run (tmpfs).

                      Here are the results:

                      linux-4.18 LTS compile

                      Dell PowerEdge R810
                      Xeon X7650 x 4
                      64 GB RAM
                      Funtoo Linux 1.3
                      kernel 4.14.78-gentoo

                      raid 6: time make -s -j 128
                      real 1m19.368s
                      user 49m0.871s
                      sys 4m42.715s


                      fusion io: time make -s -j 128
                      real 1m18.847s
                      user 49m46.197s
                      sys 5m49.159s

                      tmpfs: time make -s -j 128
                      real 1m15.964s
                      user 49m10.004s
                      sys 4m55.751s


                      So, for me at least, it doesn't seem to make much of a difference where the files are stored.
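
                      As a sanity check, the user+sys to real ratio of the tmpfs run gives the effective parallel speedup (treating the box as roughly 64 hardware threads, which is an assumption, not something stated above):

                      # effective speedup of the tmpfs run: (user + sys) / real
                      echo "scale=1; ((49*60+10) + (4*60+56)) / (1*60+16)" | bc    # ~42.7x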

