Building The Linux Kernel With LLVM's Clang Yields Comparable Performance

  • #11
    Originally posted by discordian View Post
    I think if you'd see significant differences then it would mean something is really broken.
    The time spent in the kernel's code for switching processes is minimal (the switch itself is more expensive); it's when, how often, and the strategy involved that give you measurable run-time differences.
    Either you are extremely dumb, or you just know nil about the kernel.

    Among the very CPU-intensive things the kernel does are 1) encryption (even AES can be stressful on CPUs that don't have circuitry/acceleration for it, e.g. Intel Pentiums or Celerons), 2) various sorting methods (which can be very stressful when there are thousands of objects), 3) filtering, 4) searching, and many others.

    Michael does not test any of these things. For single-threaded user-space applications the kernel usually has zero impact.
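
    (For illustration, not from the original post: a rough way to gauge how much of a given workload actually runs in the kernel is to compare user and sys time with GNU time; the x264 invocation and input file below are just placeholders.)

    # Sketch: 'sys' is time spent executing kernel code on behalf of the process;
    # for a CPU-bound encoder it is typically a tiny fraction of 'user'.
    /usr/bin/time -v x264 --preset slow -o /dev/null input.y4m
    # Compare "User time (seconds)" with "System time (seconds)" in the report.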



    • #12
      Originally posted by birdie View Post
      Either you are extremely dumb, or you just know nil about the kernel.

      Among the very CPU-intensive things the kernel does are 1) encryption (even AES can be stressful on CPUs that don't have circuitry/acceleration for it, e.g. Intel Pentiums or Celerons), 2) various sorting methods (which can be very stressful when there are thousands of objects), 3) filtering, 4) searching, and many others.
      Encryption - that would be either inline asm for hardware support or some tight loops
      Sorting - the primary concern is the algorithm
      Filtering - you mean the network stack?
      Searching - same as sorting.

      I really doubt there would be any significant differences (at any rate, no more than within normal user-space apps), but feel free to prove me wrong.
      Originally posted by birdie View Post
      Michael does not test any of these things. For single-threaded user-space applications the kernel usually has zero impact.
      And barely more for threaded ones, like the x264 example you suggested; the overhead that's spent in the kernel choosing the next process is small.

      If you really wanted to stress-test it, you'd have to set the scheduler interval to ridiculously low values (and even then the numbers would be questionable).

      I agree that the benchmark in the article is rather useless, but I don't expect any significant differences for the "kernel stuff" the kernel is doing. For everything else you can do simpler comparisons if you single it out, e.g. just run the test using the encryption code (see the sketch below).
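
      (An illustrative sketch of singling out the kernel's own encryption code, assuming the kernel was built with the tcrypt test module (CONFIG_CRYPTO_TEST); not from the original post.)

      # tcrypt runs cipher speed tests entirely inside the kernel and logs the results to dmesg.
      sudo modprobe tcrypt mode=200 sec=1   # mode=200 selects the AES speed tests
      # modprobe is expected to report an error here; tcrypt deliberately refuses to stay loaded.
      dmesg | tail -n 40
      # Running this on a clang-built and on a gcc-built kernel compares the two compilers
      # on genuinely kernel-resident, CPU-bound code.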



      • #13
        Originally posted by chithanh View Post
        Interesting benchmarks, but comparing the not-yet-released LLVM/Clang 3.5 to GCC 4.8, which is already one release behind, doesn't seem very fair.

        Comparing Clang 3.4.1 to GCC 4.9.0 would have been at least fair, or Clang 3.5 to GCC 4.10.
        Clang 3.5 had to be used due to how the LLVMLinux build scripts are set up.
        Michael Larabel
        https://www.michaellarabel.com/



        • #14
          Originally posted by Vim_User View Post
          I miss numbers on how long it takes to compile a kernel with LLVM compared to GCC.
          Unfortunately it's not straightforward to compare the build times until all the patches are mainlined and there's no need to rely upon the LLVMLinux build scripts, which also build their own Clang, etc.
          Michael Larabel
          https://www.michaellarabel.com/



          • #15
            Michael, where is the evidence that the first kernel is compiled with GCC 4.8? I don't see it. In the logs I see the first kernel you use is an Ubuntu mainline kernel, http://kernel.ubuntu.com/~kernel-ppa.../v3.14-trusty/ , and those are currently still built against Precise's toolchain, which is GCC 4.6.
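
            (For what it's worth, the compiler that built a running kernel can be checked directly; a quick sketch, not part of the original comment:)

            # The build compiler is recorded in the kernel's version banner:
            cat /proc/version
            # e.g. "Linux version 3.14.0-... (gcc version 4.6.3 ...) #1 SMP ..."
            # The same banner appears as the first "Linux version" line in the boot log:
            dmesg | grep -m1 'Linux version'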



            • #16
              What can actually be gathered from these benchmarks is that, since the variance between the results was well within the variance expected between test runs, Clang did not cause any significant breakage among kernel functions that could cause performance degradation, nor did it provide any significant FIXES in code whose performance is significantly degraded by defects in GCC.

              And that, of course, means that the test was successful, under the given workloads.


              I.e., the result of this test can be either "success" or "WHAT THE EFF IS GOING WRONG HERE???!?!"



              • #17
                Choosing the version of clang to use with LLVMLinux

                Originally posted by Michael View Post
                Unfortunately it's not straightforward to compare the build times until all the patches are mainlined and there's no need to rely upon the LLVMLinux build scripts, which also build their own Clang, etc.
                Had you set CLANG_TOOLCHAIN=prebuilt, it would have downloaded and used the latest released version of clang (v3.4):

                make -C targets/x86_64 CLANG_TOOLCHAIN=prebuilt kernel-build

                If you set CLANG_TOOLCHAIN=native it will use the native clang compiler you have in your path.

                By default the build system uses the latest clang because that is what we do our major testing against, but you have the option of using any version of clang that you wish.

                "make help" explains a lot.

                Feel free to ask us for help if something isn't working right, or doesn't make sense. We're happy to help or fix issues that you find.

                It does seem that the wiki instructions have fallen behind. We will fix that.

                Thank you for running these benchmarks.

                Behan



                • #18
                  Originally posted by Michael View Post
                  Unfortunately it's not straightforward to compare the build times until all the patches are mainlined and there's no need to rely upon the LLVMLinux build scripts, which also build their own Clang, etc.
                  The LLVMLinux build system actually times both clang and gcc builds of the same kernel source (amongst other things). Just be sure to unset USE_CCACHE to make it fair (we use ccache to make the gcc builds faster).

                  Run "make kernel-clean kernel-build" and look at the build time at the end of the build.

                  Then run "unset USE_CCACHE; make kernel-gcc-clean kernel-gcc-build" and look at the build time at the end of that build.

                  Only the actual compile of the kernel is timed.

                  On my i7-3840QM compiling the vexpress target to SSD I get:

                  $ make list-versions | egrep 'GCC|LLVM|CLANG'
                  GCC = gcc-4.8.real (Ubuntu 4.8.2-19ubuntu1) 4.8.2
                  LLVM = LLVM version 3.5.0svn r209864 commit
                  CLANG = clang version 3.5.0 r209859 commit
                  $ make kernel-clean kernel-build
                  ...
                  User time (seconds): 768.03
                  System time (seconds): 70.10
                  Percent of CPU this job got: 598%
                  Elapsed (wall clock) time (h:mm:ss or m:ss): 2:19.97
                  Maximum resident set size (kbytes): 117260
                  ...
                  $ unset USE_CCACHE; make kernel-gcc-clean kernel-gcc-build
                  ...
                  User time (seconds): 905.61
                  System time (seconds): 69.82
                  Percent of CPU this job got: 630%
                  Elapsed (wall clock) time (h:mm:ss or m:ss): 2:34.78
                  Maximum resident set size (kbytes): 136424
                  ...
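
                  (Pulling the two timed builds into one back-to-back sequence, using the "make -C targets/..." form from comment #17 above; a convenience sketch only.)

                  # Time a clang build and a gcc build of the same kernel source back to back,
                  # with ccache disabled so neither side benefits from cached objects.
                  unset USE_CCACHE
                  make -C targets/vexpress kernel-clean kernel-build
                  make -C targets/vexpress kernel-gcc-clean kernel-gcc-build
                  # Compare the "Elapsed (wall clock) time" lines printed at the end of each build.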

                  I imagine you will opt to do multiple runs of the longer x86_64 build since it will give you a better test set.

                  The build system is there to do these precise things as well as allow us to easily have people replicate the same builds (barring the patch issues you described which we are investigating).

                  Your readers may also like to see the output of "make list-versions list-settings", which shows the versions of all the important software involved and the settings used by the build system (git repos, branches, commit numbers, etc.).

                  I hope this helps.

                  It is in my best interests to have 100% fair comparisons of the two toolchains through third-party benchmarks such as yours, so I am extremely happy to help in any way I can with your future benchmarks if you are amenable.

                  Behan



                  • #19
                    Originally posted by Michael View Post
                    Clang 3.5 had to be used due to how the LLVMLinux build scripts are set up.
                    The LLVMLinux build system allows you to set which clang you want to use. By default it uses the latest one from SVN, for testing reasons, but you can choose not to.

                    $ make help | grep CLANG
                    You can choose your clang by setting the CLANG_TOOLCHAIN variable.
                    CLANG_TOOLCHAIN=prebuilt Download and use llvm.org clang
                    CLANG_TOOLCHAIN=native Use distro installed clang
                    CLANG_TOOLCHAIN=from-source Download and build from source (Default)
                    CLANG_TOOLCHAIN=from-known-good-source

                    If you set CLANG_TOOLCHAIN=prebuilt it will download and use the latest released v3.4 version of clang. CLANG_TOOLCHAIN=native will allow you to use the clang in your $PATH.

                    make CLANG_TOOLCHAIN=prebuilt kernel-clean kernel-build

                    Please feel free to ask me about these sorts of things. The build system is not supposed to get in your way. If it does, that's a bug.

                    Thank you for your benchmarking work (this one in particular).

                    Behan



                    • #20
                      Originally posted by droidhacker View Post
                      What can actually be gathered from these benchmarks is that, since the variance between the results was well within the variance expected between test runs, Clang did not cause any significant breakage among kernel functions that could cause performance degradation, nor did it provide any significant FIXES in code whose performance is significantly degraded by defects in GCC.

                      And that, of course, means that the test was successful, under the given workloads.


                      I.e., the result of this test can be either "success" or "WHAT THE EFF IS GOING WRONG HERE???!?!"
                      EXACTLY!
                      At least there is one commenter on this board who's not a complete idiot.
                      Hint, boys --- not EVERYTHING is a dick-measuring contest. Not even everything comparing GCC to LLVM.
