
Five Years Of Linux Kernel Benchmarks: 2.6.12 Through 2.6.37


  • #61
    All I see is abstraction. The Linux kernel has many, many applications. But I have no idea what I'm talking about...

    I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself, but with a specific implementation that's not representative of all the hardware that Linux can run on...

    Don't pay attention to the fool who questions the established theory...



    • #62
      Originally posted by V!NCENT View Post
      I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself, but with a specific implementation that's not representative of all the hardware that Linux can run on...
      This is incorrect. Optimizations to the drivers used in the VM itself can still affect the performance of the virtual machine being tested. Optimizations made to the drivers used in the VM are no different from optimizations made in the drivers for physical hardware.

      The problem with testing on a virtual machine is that you now have to worry about bottlenecks in the host, activity on the host machine affecting the performance on the guest VM, and quirks of the VM software (such as VirtualBox not respecting fsync() as we learned in several Phoronix filesystem articles).

      That being said, it's still nice to see these benchmarks. They may be affected by the host VM software, but they have taught me that 2.6.29 was very likely a crappy kernel release.
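The fsync() point above can be made concrete: a filesystem benchmark built around synchronous writes is dominated by fsync() latency, so a VM layer that quietly drops fsync() inflates the numbers. A minimal sketch (my own illustration, not from the article):

```python
import os
import tempfile
import time

# Time N small synchronous writes. Each write is followed by fsync(),
# which forces the data to stable storage before returning. A VM layer
# that silently ignores fsync() makes this loop look far faster than
# the same loop on real hardware.
def timed_sync_writes(n=50, size=4096):
    payload = b"x" * size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, payload)
            os.fsync(fd)  # the call VirtualBox was reported not to honor
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed / n  # average seconds per write+fsync pair

avg = timed_sync_writes()
print(f"avg write+fsync latency: {avg * 1e6:.1f} us")
```

On a host with a real disk this latency reflects the storage stack; inside a VM it reflects whatever the virtualization layer decides fsync() means.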



      • #63
        Originally posted by TemplarGR View Post
        A large part of kernel functionality is accessed using userspace libraries. If those libraries are old, some features cannot be used or you lose some improvements of later versions.
        I'm aware of that. That's true for X.org drivers, syscall support in glibc and other things.

        But let's face the truth - you cannot compare apples to oranges. If you want to compare kernel speed, you need to change only the kernel. That was true 20 years ago and 10 years ago, when I played with this stuff, and it's still true now.

        What you are talking about is the speed of the whole OS. And you are right - for such a benchmark, everything should be updated to the appropriate versions. But AFAIU this benchmark intends to measure only kernel speed.



        • #64
          Originally posted by V!NCENT View Post
          All I see is abstraction. The Linux kernel has many, many applications. But I have no idea what I'm talking about...

          But you can still learn something.

          In fact, I always learn something new every day.



          • #65
            Originally posted by michal View Post
            I'm aware of that. That's true for X.org drivers, syscall support in glibc and other things.

            But let's face the truth - you cannot compare apples to oranges. If you want to compare kernel speed, you need to change only the kernel. That was true 20 years ago and 10 years ago, when I played with this stuff, and it's still true now.

            What you are talking about is the speed of the whole OS. And you are right - for such a benchmark, everything should be updated to the appropriate versions. But AFAIU this benchmark intends to measure only kernel speed.
            If you really want to test only kernel speed, then you shouldn't use a regular GUI distribution anyway. You would need a different testing environment, stripped of almost all non-essential software.



            • #66
              Originally posted by V!NCENT View Post
              All I see is abstraction. The Linux kernel has many, many applications. But I have no idea what I'm talking about...

              I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself, but with a specific implementation that's not representative of all the hardware that Linux can run on...

              Don't pay attention to the fool who questions the established theory...
              You still do not know what you are talking about...

              It is true that some of the kernel's progress can be shown in a virtualized environment too. But that doesn't mean you can use these results as an indication of kernel progress in general.

              I don't know if you are an IT pro or a student, but I find it hard to believe that you are one and still can't understand what the kernel is all about: managing hardware. That's it. Just managing hardware and drivers, and setting priorities. Almost nothing else.

              So you are telling me that when you want to benchmark a kernel whose primary role is to manage real hardware, you prefer a virtual machine over real hardware... Nice...

              You also fail to understand that what you are really benchmarking is kernel 2.6.35's performance running KVM. That kernel handles the Core i7, the RAM, the disk, the threads and everything. You are also benchmarking KVM itself.

              That is why most of the graphs are a straight line for almost 5 years... There are exceptions, but that is the rule in this test. The real reason is that whatever the guest OS is, threads will run based on the host's capabilities, not the guest's.
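As an aside, it is at least easy to confirm from inside a guest that a hypervisor is in the picture: on Linux the CPU flags in /proc/cpuinfo usually include "hypervisor" when running under KVM, VirtualBox, VMware and the like. A small sketch (my own illustration; it returns None where /proc/cpuinfo does not exist):

```python
import os

# On Linux, a guest CPU usually advertises the "hypervisor" flag in
# /proc/cpuinfo when running under KVM, VirtualBox, VMware, etc.
# Returns True/False on Linux, or None where /proc/cpuinfo is absent.
def running_under_hypervisor():
    if not os.path.exists("/proc/cpuinfo"):
        return None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return "hypervisor" in line.split()
    return False

print(running_under_hypervisor())
```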



              • #67
                1. You can't run benchmarks in a virtual machine, especially kernel benchmarks.
                2. Comparing CPU-bound applications like bzip2 or encryption is totally wrong and misleading, because the kernel is barely involved in such computation. The performance of such applications does not depend on the kernel version or even the OS; it depends on the CPU and the compilation flags.

                Sorry, but this comparison is mostly useless.
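Point 2 is easy to check directly: time a bzip2-style compression and split the elapsed CPU time into user vs system time. A quick sketch (my own illustration, not from the article):

```python
import bz2
import os

# CPU-bound work like bzip2 compression spends nearly all its time in
# user space; time spent in the kernel (system time) is marginal, so
# the kernel version has little effect on such a benchmark.
data = os.urandom(2 * 1024 * 1024)  # incompressible worst-case input

t0 = os.times()
compressed = bz2.compress(data)
t1 = os.times()

user_time = t1.user - t0.user      # time running our own (library) code
sys_time = t1.system - t0.system   # time spent inside the kernel
print(f"user: {user_time:.3f}s  sys: {sys_time:.3f}s")
```

For CPU-bound compression, virtually all the time shows up as user time, which is exactly why the kernel version barely moves results like these.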



                • #68
                  Originally posted by TemplarGR View Post
                  If you really want to test only kernel speed, then you shouldn't use a regular GUI distribution anyway. You would need another testing environment. Stripped of almost all non-essential stuff.
                  It's not really necessary. There is one simple trick that you can use (and it is commonly used) - boot straight into /bin/sh. Just pass init=/bin/sh as a kernel parameter and you get a really clean testing environment.
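For reference, the parameter is appended to the kernel command line at boot; with GRUB the edited entry might look like this (kernel version and root device are placeholders, and GRUB legacy uses `kernel` where GRUB 2 uses `linux`):

```
# Edit the boot entry at the GRUB menu and append init=/bin/sh:
linux /boot/vmlinuz-2.6.37 root=/dev/sda1 ro init=/bin/sh
```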



                  • #69
                    Originally posted by TemplarGR View Post
                    I propose adding a page on this article, testing a native 2005 era machine using Fedora Core 4 with its stock kernel, and later compiling a recent kernel. I believe it will prove my point...
                    And what do you suspect the result will be?

                    Anyway, I think Linux should focus on bug fixes and slimming down the kernel in every other release, instead of adding new functionality all the time.



                    • #70
                      Originally posted by michal View Post
                      In your test you are benchmarking:
                      - hardware
                      - host kernel space
                      - host user space
                      - virtualization platform
                      - guest kernel space
                      - guest user space

                      I don't give a flying ship about the results. You can do what you want in your free time. Just don't call these benchmarks meaningful.
                      But since the only variable is the guest kernel (I assume any background activity on the host was disabled), there is still value in this comparison. The results are not directly comparable with other benchmarks, but they should be quite telling about the differences between the tested kernels.

                      After all, the virtual machine is just that - a machine. It has certain performance characteristics, but so do all real machines - and they are all different.

                      You would have a point if it could be shown that the given VM configuration (including the host kernel, user space and the virtualization platform) has vastly different, non-linear characteristics compared to a real hardware machine - for example, if assumptions about hardware that were valid when the kernels were written did not hold on a VM. One such assumption could be disk_performance << dram_performance << sram_performance << hardware_register_performance.

                      Then again, we have seen this happen before with real hardware (CPU caches, super-scalar and out-of-order architectures) and are seeing it today (SSDs, multiprocessing). So testing on real hardware might not always be entirely fair either.
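That inequality can be put in ballpark numbers. The figures below are rough, widely cited order-of-magnitude latencies for hardware of this era (my own illustration, not measurements from this test):

```python
# Rough order-of-magnitude access latencies in nanoseconds
# (ballpark figures, illustrative only, not measured here).
latency_ns = {
    "cpu_register": 0.3,
    "sram_l1_cache": 1,
    "dram": 100,
    "hdd_seek": 10_000_000,  # ~10 ms
}

# The ordering assumption baked into the kernels under test:
ordered = sorted(latency_ns, key=latency_ns.get)
for fast, slow in zip(ordered, ordered[1:]):
    ratio = latency_ns[slow] / latency_ns[fast]
    print(f"{slow} is ~{ratio:.0f}x slower than {fast}")
```

A VM (or an SSD) that compresses or reshuffles these gaps changes which kernel optimizations actually pay off, which is the crux of the objection above.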

