Five Years Of Linux Kernel Benchmarks: 2.6.12 Through 2.6.37


  • #71
    Originally posted by kebabbert View Post
    And what will the result be, you suspect?

    Anyway, I think Linux should focus on bug fixes and slimming the kernel every other release, instead of adding new functionality all the time.
    I suspect the results would vary. But even if they didn't, at least then we could really speak about actual kernel performance...



    • #72
      Originally posted by misiu_mp View Post
      But since the only variable is the guest kernel (I assume any background activity on the host was disabled), there is still value in this comparison. The results are not directly comparable with other benchmarks, but they should be quite telling about the differences between the tested kernels.
      After all, the virtual machine is just that - a machine. It has certain performance characteristics, but so do all real machines - and they are all different.
      You would have a point if it could be shown that the given VM configuration (including the host kernel, user space and the virtualization platform) has vastly non-linearly different characteristics from a real hardware machine - for example, if assumptions about hardware that were valid when the kernels were written did not hold on a VM. An example could be disk_performance << dram_performance << sram_performance << hardware_register_performance.
      Then again, we have seen this happen before with real hardware (CPU caches, super-scalar, out-of-order architectures) and are seeing it today (SSDs, multiprocessing). So testing on real hardware might not always be entirely fair either.
      It is true that the only variable is the guest kernel; that is why some of the benchmarks do show significant variations. The problem is that a virtual machine like KVM is really just another userspace application with beefy compute and RAM requirements. What manages the hardware capabilities of the system is the host kernel.

      You should ask yourself: in what areas do I expect the kernel to be improved to increase performance?

      What could the kernel possibly do to increase performance?

      You will find that most of it relates to hardware-management optimizations. There are other kinds of improvements, of course, but it is mostly about hardware, especially RAM and disk. This is what reading the kernel changelogs will reveal: it is mostly drivers and filesystems...

      This is not tested in a virtualized environment, at least not in KVM or VirtualBox; the guest needs a larger degree of hardware access to really affect performance.

      It is true that results may vary depending on the testing machine, but this is what the kernel is about: utilizing hardware. Apart from changing the allocation of threads and/or I/O, there is nothing for the kernel devs to do except improve drivers and filesystems...
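
      For what it's worth, here is a rough sketch of how a guest can tell it is not driving real hardware (my own illustration, assuming x86 and gcc's <cpuid.h>, nothing from the article): CPUID advertises a hypervisor bit, and under KVM the vendor signature reads "KVMKVMKVM".

      Code:
      /* Ask CPUID whether we are running under a hypervisor, and which one. */
      #include <stdio.h>
      #include <string.h>
      #include <cpuid.h>

      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          /* Leaf 1: bit 31 of ECX is the "hypervisor present" flag. */
          if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
              return 1;
          if (!(ecx & (1u << 31))) {
              puts("No hypervisor flag: likely bare metal.");
              return 0;
          }

          /* Leaf 0x40000000: hypervisor vendor signature in EBX/ECX/EDX. */
          char sig[13] = { 0 };
          __cpuid(0x40000000, eax, ebx, ecx, edx);
          memcpy(sig + 0, &ebx, 4);
          memcpy(sig + 4, &ecx, 4);
          memcpy(sig + 8, &edx, 4);
          printf("Hypervisor signature: %s\n", sig);
          return 0;
      }

      And everything below that signature - the disks, the network card - is qemu's emulation, so improvements to drivers for real hardware are simply never exercised.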



      • #73
        If a test uses certain hardware, it only tests the drivers for that particular hardware. Unless you run on the same hardware, it's not interesting for you. What is interesting in a kernel is the generic subsystems, process and memory management, which are not that dependent on the particular devices in your system and will give improvements on all systems (given a similar architecture).
        KVM also emulates certain hardware (or rather qemu does), and the drivers for that hardware are tested in addition to the generic subsystems.
        As I said, unless the host makes the virtualized machine behave in unexpected ways (e.g. simulating RAM through swap, ignoring fsyncs, or making some operations much slower or faster than others), the results are valid. I would like to see some concrete examples of how KVM differs in behaviour from a real hardware configuration.
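
        Here is one way to probe the fsync case concretely (a rough sketch of my own; the file name is made up): time write()+fsync() in a loop. A disk that honours flushes typically takes milliseconds per flush; if the virtualization layer absorbs them, you may see microseconds, which would skew any disk benchmark.

        Code:
        /* Time write()+fsync() latency on a small scratch file. */
        #include <stdio.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <time.h>

        int main(void)
        {
            char buf[4096];
            memset(buf, 'x', sizeof(buf));

            /* Hypothetical scratch file in the current directory. */
            int fd = open("fsync_probe.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            struct timespec t0, t1;
            double total_ms = 0.0;
            const int iters = 100;

            for (int i = 0; i < iters; i++) {
                clock_gettime(CLOCK_MONOTONIC, &t0);
                if (write(fd, buf, sizeof(buf)) != sizeof(buf)) { perror("write"); return 1; }
                if (fsync(fd) != 0) { perror("fsync"); return 1; }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                total_ms += (t1.tv_sec - t0.tv_sec) * 1e3
                          + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            }
            printf("avg write+fsync latency: %.3f ms\n", total_ms / iters);

            close(fd);
            unlink("fsync_probe.tmp");
            return 0;
        }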

        In many of the compute-intensive tests I have noticed a slight trend of increasing performance - something I would expect from incremental optimizations of the crucial generic kernel subsystems (such as memory management and task switching).
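
        By compute-intensive I mean something like this toy loop (again just a sketch of mine): it makes almost no syscalls, so under KVM it runs at essentially native speed, and version-to-version differences reflect the generic subsystems rather than any emulated device.

        Code:
        /* Compute-bound microbenchmark: no I/O, almost no kernel involvement. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            /* Busy arithmetic; volatile keeps the compiler from removing it. */
            volatile double acc = 0.0;
            for (long i = 1; i <= 200000000L; i++)
                acc += 1.0 / (double)i;

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("sum = %f, elapsed = %.3f s\n", acc, secs);
            return 0;
        }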

        Originally posted by TemplarGR View Post
        It is true that results may vary depending on the testing machine, but this is what the kernel is about: utilizing hardware. Apart from changing the allocation of threads and/or I/O, there is nothing for the kernel devs to do except improve drivers and filesystems...
        They vary a lot! To the point that I would consider tests done on really old hardware just as worthless as you consider those KVM tests.



        • #74
          Originally posted by misiu_mp View Post
          If a test uses certain hardware, it only tests the drivers for that particular hardware. Unless you run on the same hardware, it's not interesting for you. What is interesting in a kernel is the generic subsystems, process and memory management, which are not that dependent on the particular devices in your system and will give improvements on all systems (given a similar architecture).
          KVM also emulates certain hardware (or rather qemu does), and the drivers for that hardware are tested in addition to the generic subsystems.
          As I said, unless the host makes the virtualized machine behave in unexpected ways (e.g. simulating RAM through swap, ignoring fsyncs, or making some operations much slower or faster than others), the results are valid. I would like to see some concrete examples of how KVM differs in behaviour from a real hardware configuration.

          In many of the compute-intensive tests I have noticed a slight trend of increasing performance - something I would expect from incremental optimizations of the crucial generic kernel subsystems (such as memory management and task switching).


          They vary a lot! To the point that I would consider tests done on really old hardware just as worthless as you consider those KVM tests.
          Nope. You are wrong.

          First of all, Michael said he used the same configuration on all kernels, meaning he didn't enable new features that required explicit configuration. He also benchmarked ext3 performance only. You do realize this makes a world of difference, right?

          If a program adds a feature in a new version to improve performance, and you don't enable it, then how could you benchmark this improvement?

          Each kernel has different configuration requirements. It really is much more complicated than you think.
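
          If you want to make the "same configuration" claim checkable, a sketch like this can verify what the running kernel was actually built with (assuming the distro installs /boot/config-$(uname -r); CONFIG_PREEMPT below is just an example symbol):

          Code:
          /* Report whether the running kernel was built with a given option. */
          #include <stdio.h>
          #include <string.h>
          #include <sys/utsname.h>

          int main(int argc, char **argv)
          {
              const char *symbol = (argc > 1) ? argv[1] : "CONFIG_PREEMPT";

              struct utsname u;
              if (uname(&u) != 0) { perror("uname"); return 1; }

              char path[256], needle[256];
              snprintf(path, sizeof(path), "/boot/config-%s", u.release);
              snprintf(needle, sizeof(needle), "%s=", symbol);

              FILE *f = fopen(path, "r");
              if (!f) { perror(path); return 1; }

              char line[512];
              int found = 0;
              while (fgets(line, sizeof(line), f)) {
                  if (strncmp(line, needle, strlen(needle)) == 0) {
                      printf("%s: %s", path, line); /* e.g. CONFIG_PREEMPT=y */
                      found = 1;
                  }
              }
              fclose(f);

              if (!found)
                  printf("%s is not set in %s\n", symbol, path);
              return 0;
          }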

          Plus, even though the guest kernel plays a role, it is the host kernel that manages the hardware, RAM, threads, and I/O.



          • #75
            Originally posted by TemplarGR View Post
            Nope. You are wrong.

            First of all, Michael said he used the same configuration on all kernels, meaning he didn't enable new features that required explicit configuration. He also benchmarked ext3 performance only. You do realize this makes a world of difference, right?

            If a program adds a feature in a new version to improve performance, and you don't enable it, then how could you benchmark this improvement?

            Each kernel has different configuration requirements. It really is much more complicated than you think.
            I don't know which part you are referring to as being wrong, but I agree that not enabling optimizing features that are normally used with certain versions of kernels is wrong for benchmarking general performance. That part has nothing to do with the type of machine it is being tested on, though.

            Originally posted by TemplarGR View Post
            Plus, even though the guest kernel plays a role, it is the host kernel that manages the hardware, RAM, threads, and I/O.
            Does it really? Are you saying that if the OS inside KVM creates a thread, that thread is actually created in the host and managed by it? Or that when a guest process requests memory, it is assigned to the process by the host? That would require the host to know about the guest's processes.
            I assumed the guest is *one* simple process with a ton of memory statically assigned to it, and that the host doesn't care much about what the guest is doing with it. If the host is otherwise idle, there is little to affect the guest's memory or CPU speed across the different tests.
            The newest hardware even allows the guest to handle page faults directly (hardware nested paging, i.e. EPT/NPT).
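
            That is exactly how the KVM API looks from userspace: a plain process opens /dev/kvm, creates a VM, and donates a slice of its own mmap'd memory as "guest physical RAM". A minimal sketch (requires access to /dev/kvm; no vcpu is even created here):

            Code:
            /* A KVM guest is owned by an ordinary process; its RAM is our mmap. */
            #include <stdio.h>
            #include <stdint.h>
            #include <fcntl.h>
            #include <unistd.h>
            #include <sys/ioctl.h>
            #include <sys/mman.h>
            #include <linux/kvm.h>

            int main(void)
            {
                int kvm = open("/dev/kvm", O_RDWR);
                if (kvm < 0) { perror("/dev/kvm"); return 1; }

                printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

                int vm = ioctl(kvm, KVM_CREATE_VM, 0); /* new, empty VM */
                if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

                /* "Guest RAM" is plain anonymous memory in this process. */
                size_t ram_size = 16 * 1024 * 1024;
                void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (ram == MAP_FAILED) { perror("mmap"); return 1; }

                struct kvm_userspace_memory_region region = {
                    .slot            = 0,
                    .guest_phys_addr = 0,
                    .memory_size     = ram_size,
                    .userspace_addr  = (uint64_t)ram,
                };
                if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
                    perror("KVM_SET_USER_MEMORY_REGION");
                    return 1;
                }

                puts("VM created; its RAM is just this process's memory.");
                close(vm);
                close(kvm);
                return 0;
            }

            The vcpus would likewise just be threads of this process, scheduled by the host like any others.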

