Linux Kernel At 19.5 Million Lines Of Code, Continues Rising


  • Linux Kernel At 19.5 Million Lines Of Code, Continues Rising

    Phoronix: Linux Kernel At 19.5 Million Lines Of Code, Continues Rising

    With Linux 4.1 having been released this week and being mid-way through 2015, here's some Git development statistics for the newest kernel code...


  • #2
    Am I the only one who thinks that more lines of code, after a certain point, is a bad thing? I am wondering how many of those lines are actually kernel lines.
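    (Not from the post, just an illustration.) If you want a rough answer yourself, a small script over a local kernel checkout shows how the line count splits across top-level directories; the KERNEL_TREE path below is a placeholder you would point at your own clone:

    import os
    from collections import Counter

    KERNEL_TREE = "/path/to/linux"   # hypothetical path to a local kernel source tree
    EXTS = (".c", ".h", ".S")        # count C sources, headers, and assembly

    counts = Counter()
    for root, _dirs, files in os.walk(KERNEL_TREE):
        # bucket every file under its top-level directory (drivers/, arch/, fs/, ...)
        top = os.path.relpath(root, KERNEL_TREE).split(os.sep)[0]
        for name in files:
            if name.endswith(EXTS):
                with open(os.path.join(root, name), errors="replace") as f:
                    counts[top] += sum(1 for _ in f)

    for top, lines in counts.most_common(10):
        print(f"{top:15s} {lines:>12,d} lines")

    On recent trees this kind of tally typically shows drivers/ and arch/ accounting for the large majority of the total.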



    • #3
      Originally posted by bpetty View Post
      Am I the only one who thinks that more lines of code, after a certain point, is a bad thing? I am wondering how many of those lines are actually kernel lines.
      Nope, you're not the only one. I don't mind so much if the code improves hardware support or capability. But there is a ton of code in the kernel that doesn't, and a lot of code in there that should be in user space (not microkernel, just stuff that shouldn't be in the kernel but is).
      Last edited by duby229; 23 June 2015, 11:02 AM.



      • #4
        The vast majority of the code is hardware drivers that are only loaded when the appropriate hardware is present. That's not a bad thing.
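        As a quick illustration of that point (my sketch, not something from the thread): on a running system you can see how few of those drivers are actually resident by reading /proc/modules, which lists only the loadable modules that were pulled in for the present hardware or explicitly loaded:

        with open("/proc/modules") as f:
            # each line: name, size in bytes, reference count, users, state, load address
            rows = [line.split()[:3] for line in f]

        rows.sort(key=lambda r: int(r[1]), reverse=True)
        total = sum(int(size) for _name, size, _refs in rows)
        print(f"{len(rows)} modules loaded, ~{total / 1024:.0f} KiB total")
        for name, size, refs in rows[:15]:
            print(f"{name:25s} {int(size):>10,d} bytes  refcount {refs}")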



        • #5
          Additionally, remember the Linux tree contains all supported architectures too (like PPC, MIPS, UniCore, ARM, etc.) with their respective hardware drivers; add all the filesystems, add the debug infrastructure code for kernel developers, add the SIMD architecture-specific variants for compression and encryption.

          The point is that the LoC count of the kernel tree is a mostly useless measure, since the kernel is modularized at compile time, not in the source: the build process leaves out all the unnecessary code and builds modules for the on-demand code (drivers, non-root filesystems, etc.), so in the end the kernel reduces itself to the minimum needed by your hardware at runtime. Hence all that LoC craziness gets reduced to "you wasted 5 minutes of your life worrying about nothing important because you didn't want to waste 1 minute reading actual info instead of FUDing".
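          To put a rough number on that split (my own sketch, not from the post): many distros ship the kernel configuration at /boot/config-<release>, and counting which options are built in (=y) versus built as loadable modules (=m) shows how much of the tree only ever exists as on-demand modules. The path below is an assumption; adjust it for your system:

          import os
          import re

          CONFIG = f"/boot/config-{os.uname().release}"   # common, but not universal, location

          counts = {"y": 0, "m": 0, "other": 0}
          with open(CONFIG) as f:
              for line in f:
                  match = re.match(r"CONFIG_\w+=(\S+)", line)
                  if match:
                      value = match.group(1)
                      counts[value if value in ("y", "m") else "other"] += 1

          print(f"built-in (=y): {counts['y']}, modules (=m): {counts['m']}, other: {counts['other']}")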



          • #6
            Is it so difficult to implement PhysX code in the kernel!? Or to make a specific driver to avoid the fans of the card running all the time!?
            Last edited by Azrael5; 23 June 2015, 01:13 PM.



            • #7
              Originally posted by Azrael5 View Post
              Is it so difficult to implement PhysX code in the kernel!? Or to make a specific driver to avoid the fans of the card running all the time!?
              What does this have to do with the Linux kernel? The term '(compute) kernel' as used in the context of DSP/OpenCL/CUDA/PhysX/whatever has little to do with OS kernels.

              @jrchk2k8, I think the worries are less about performance and more about maintainability; those almost 20M lines are not going to refactor and audit themselves ...



              • #8
                Originally posted by CrystalGamma View Post

                What does this have to do with the Linux kernel? The term '(compute) kernel' as used in the context of DSP/OpenCL/CUDA/PhysX/whatever has little to do with OS kernels.

                @jrchk2k8, I think the worries are less about performance and more about maintainability; those almost 20M lines are not going to refactor and audit themselves ...

                They are refactored all the time; that is why some people cry like babies about ABI issues.



                • #9
                  Originally posted by macemoneta View Post
                  The vast majority of the code is hardware drivers that are only loaded when the appropriate hardware is present. That's not a bad thing.
                  Yes, if the drivers are compiled as modules they get loaded when needed for a device; otherwise they don't get loaded, and thus add no overhead. However, if drivers get compiled into the kernel they are always loaded whether or not the hardware is actually in the system, and thus become part of the kernel proper.
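                  (A small sketch of my own, not from the post.) On a modular kernel you can see which drivers ended up built into the running image rather than as loadable .ko files, since the build records them in modules.builtin; the path layout below is the common one but may vary by distro:

                  import os

                  release = os.uname().release
                  builtin_list = f"/lib/modules/{release}/modules.builtin"   # typical location

                  with open(builtin_list) as f:
                      # one entry per built-in module, e.g. kernel/drivers/ata/ahci.ko
                      builtin = [os.path.basename(line.strip()) for line in f if line.strip()]

                  print(f"{len(builtin)} drivers/subsystems built into kernel {release}, e.g.:")
                  for name in builtin[:10]:
                      print("  ", name)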



                  • #10
                    Originally posted by DeepDayze View Post

                    Yes, if the drivers are compiled as modules they get loaded when needed for a device; otherwise they don't get loaded, and thus add no overhead. However, if drivers get compiled into the kernel they are always loaded whether or not the hardware is actually in the system, and thus become part of the kernel proper.
                    Anything I expect might get booted from, like SATA, PATA, SCSI, I always build into the kernel. When building live media there are usually a few dozen drivers built in. Do you think the overhead is negligible? Would it be worth it to try to build them as modules and use an initramfs to load them from instead?

