Building The Default x86_64 Linux Kernel In Just 16 Seconds


  • _Alex_
    replied
    Originally posted by cbdougla
    Just out of curiosity, I decided to try compiling this kernel on a computer I have access to.
    I did it three times: once on a RAID 6 array, once on a Fusion-io drive, and once in /run (tmpfs).

    Here are the results

    linux-4.18 LTS compile

    Dell PowerEdge R810
    Xeon X7650 x 4
    64 GB RAM
    Funtoo Linux 1.3
    kernel 4.14.78-gentoo

    raid 6: time make -s -j 128
    real 1m19.368s
    user 49m0.871s
    sys 4m42.715s


    fusion io: time make -s -j 128
    real 1m18.847s
    user 49m46.197s
    sys 5m49.159s

    tmpfs: time make -s -j 128
    real 1m15.964s
    user 49m10.004s
    sys 4m55.751s


    So it seems, for me at least, that it doesn't make much of a difference where the files are stored.
    Which in turn means that something else is acting as the scaling bottleneck*, other than I/O. RAM latency? RAM bandwidth? Build-script issues? Single-threaded linking? The Linux scheduler? GCC scheduling overhead, since the thread that schedules jobs is also doing compilation work? Etc. etc.

    * I'm referring to the fact that build time does not improve linearly as we add extra threads, especially when we add A LOT of them, like going from a 1-socket Epyc to 2 sockets: https://openbenchmarking.org/embed.p...ha=371b7fe&p=2

    From 20s to 16s, instead of being near 11-12s on the 7742.
    7601 scales better on two sockets, from 37s to 23s.



  • Sonadow
    replied
    Originally posted by Michael

    It's not a matter of "not heard of", but rather trying to be realistic - how many people actually build in tmpfs?
    With 128GB of memory in both my dual-Xeon 2690 v2 and my Threadripper 2990WX boxes, I'd be mad not to use tmpfs for building just about everything I need to build.
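    For anyone wanting to try this, a minimal sketch of sanity-checking a tmpfs build location first (the path and kernel version are illustrative; mounting a fresh tmpfs needs root, so this uses /dev/shm, which is tmpfs on most distros):

```shell
# Verify a directory is actually tmpfs before building in it.
# /dev/shm is tmpfs on most Linux systems; a kernel tree plus build
# artifacts needs a few GB of RAM-backed space, so size accordingly.
builddir=/dev/shm/kbuild                    # illustrative build location
fstype=$(stat -f -c %T /dev/shm)            # prints "tmpfs" on most distros
echo "fs type: $fstype"
# mkdir -p "$builddir"
# tar xf linux-4.18.tar.xz -C "$builddir"   # then configure and make there
```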



  • programmerjake
    replied
    Originally posted by nuetzel

    In which time...?
    15s of compile time on a Pentium 4.

    tcc does far fewer optimizations than gcc or llvm.



  • cbdougla
    replied
    Just out of curiosity, I decided to try compiling this kernel on a computer I have access to.
    I did it three times: once on a RAID 6 array, once on a Fusion-io drive, and once in /run (tmpfs).

    Here are the results

    linux-4.18 LTS compile

    Dell PowerEdge R810
    Xeon X7650 x 4
    64 GB RAM
    Funtoo Linux 1.3
    kernel 4.14.78-gentoo

    raid 6: time make -s -j 128
    real 1m19.368s
    user 49m0.871s
    sys 4m42.715s


    fusion io: time make -s -j 128
    real 1m18.847s
    user 49m46.197s
    sys 5m49.159s

    tmpfs: time make -s -j 128
    real 1m15.964s
    user 49m10.004s
    sys 4m55.751s


    So it seems, for me at least, that it doesn't make much of a difference where the files are stored.



  • willmore
    replied
    Originally posted by wizard69
    Wow!

    I can remember building the kernel for Red Hat 4 or 5 on a laptop of the day. It was one of those things where you go to bed and hope it is finished before morning. Of course, laptops of that time period really sucked, but man, this is a massive delta.
    I remember being blown away when I could build a kernel in 16 *minutes*.

    I should add that I've worked on machines where it was more like 16 hours. 386sx/16 with 4MB of EMM on an ISA card. Laptop PATA IDE drives hanging off the ISA bus as well. It's all I had free to do MD driver testing on, so it's what I used. Thank goodness for scripts that can just go run on their own.



  • gordanb
    replied
    Originally posted by nanonyme

    Meh, that sounds nasty. People usually assume /var/tmp persists over reboot.
    I've never found anything that causes a problem with /tmp and /var/tmp on tmpfs, and I have been using that setup since tmpfs has existed.



  • gordanb
    replied
    Originally posted by Michael

    It's not a matter of "not heard of", but rather trying to be realistic - how many people actually build in tmpfs?
    I would bet good money that it's multiples more than build on Optane. I have been building on tmpfs for years and I maintain a Linux distro. Having /var/lib/mock on tmpfs is absolutely the way forward.
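    For reference, a hedged sketch of what that can look like in /etc/fstab (the size and mode here are illustrative, not gordanb's actual settings; size it to your RAM and build load):

```
# /etc/fstab — keep mock's build roots on tmpfs (size is illustrative)
tmpfs  /var/lib/mock  tmpfs  size=64G,mode=0755  0  0
```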



  • wizard69
    replied
    Wow!

    I can remember building the kernel for Red Hat 4 or 5 on a laptop of the day. It was one of those things where you go to bed and hope it is finished before morning. Of course, laptops of that time period really sucked, but man, this is a massive delta.



  • _Alex_
    replied
    I think there's something fishy in the way Linux scaling works in some applications... I mean, compiling and video encoding do not scale very well from 1 socket to 2 sockets - unlike crypto hashing or John the Ripper cracking. These bottlenecks have to be addressed so that 2 sockets start to make sense and become viable. Perf/$ tops out at 1 socket for these apparently bottlenecked apps - which shouldn't really be bottlenecked, because their workloads can be parallelized.

    It could be I/O... but it could be RAM operations being slow (due to either latency or bandwidth), or it could be that one thread does the work of scheduling the load onto the other threads, and while it is busy itself it delays assigning tasks to them. Or it could be something else entirely...
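    One way to probe this without a second socket is to sweep the worker count and watch where wall time stops halving. A self-contained sketch (each worker here runs a small CPU-bound shell loop as a stand-in for a compile job; on a real kernel tree you would sweep time make -s -jN the same way):

```shell
# Spawn N CPU-bound workers and report wall time for each N.
# With perfect scaling, doubling N up to the core count would roughly
# halve elapsed time; where it stops doing so, another bottleneck
# (RAM, scheduling, serialization) has taken over.
work() { i=0; while [ "$i" -lt 50000 ]; do i=$((i+1)); done; }
for n in 1 2 4; do
    start=$(date +%s%N)
    for _ in $(seq "$n"); do work & done
    wait                                   # join all background workers
    end=$(date +%s%N)
    echo "workers=$n elapsed_ms=$(( (end - start) / 1000000 ))"
done
```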



  • tchiwam
    replied
    Also of interest: a second make clean and make once all the sources are in the page cache.
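    A quick unprivileged way to see the effect tchiwam is describing (file size is arbitrary; a full cold/warm comparison would first drop caches as root with sync; echo 3 > /proc/sys/vm/drop_caches):

```shell
# Read a file twice: the first read pulls it into the page cache, the
# second is served straight from RAM. A rebuild with warm caches
# benefits the same way for source files and headers.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=32 status=none
time cat "$f" > /dev/null     # first read: populates the page cache
time cat "$f" > /dev/null     # second read: served from RAM, much faster
rm -f "$f"
```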

