Five Years Of Linux Kernel Benchmarks: 2.6.12 Through 2.6.37


  • #61
    All I see is abstraction. The Linux kernel has many, many appliances. But I have no idea what I'm talking about...

    I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself but with a specific implementation that's not representative of all the hardware that Linux can run with...

    Don't pay attention to the retard that questions the established theory...



    • #62
      Originally posted by V!NCENT View Post
      I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself but with a specific implementation that's not representative of all the hardware that Linux can run with...
      This is incorrect. Optimizations to the drivers used inside the VM can still affect the performance of the virtual machine being tested. Optimizations made to the drivers used in the VM are no different from optimizations made to the drivers for physical hardware.

      The problem with testing on a virtual machine is that you now have to worry about bottlenecks in the host, activity on the host machine affecting the performance of the guest VM, and quirks of the VM software (such as VirtualBox not respecting fsync(), as we learned in several Phoronix filesystem articles).
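
      As a rough illustration of what "respecting fsync()" means here (a minimal sketch, assuming GNU dd inside the guest; the file path is just a placeholder): synchronized writes should be dramatically slower than buffered ones, and if they are not, the layer underneath is most likely acknowledging writes before they reach stable storage.

          # buffered writes: the kernel (or the VM beneath it) may only cache them
          dd if=/dev/zero of=/tmp/fsync-test bs=4k count=2000
          # synchronized writes: each block must hit stable storage before dd continues
          dd if=/dev/zero of=/tmp/fsync-test bs=4k count=2000 oflag=dsync
          rm /tmp/fsync-test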

      That being said, it's still nice to see these benchmarks. They may be affected by the host VM software, but they have taught me that 2.6.29 was very likely a crappy kernel release.



      • #63
        Originally posted by TemplarGR View Post
        A large part of kernel functionality is accessed using userspace libraries. If those libraries are old, some features cannot be used or you lose some improvements of later versions.
        I'm aware of that. That's true for X.org drivers, syscall support in glibc and other things.

        But let's face the truth - you cannot compare apples to oranges. If you want to compare kernel speed, you need to change only the kernel. That was true 20 years ago and 10 years ago, when I played with this stuff, and it's still true now.

        What you are talking about is the speed of the whole OS. And you are right - for such a benchmark everything should be updated to the appropriate versions. But AFAIU this benchmark intends to measure only kernel speed.



        • #64
          Originally posted by V!NCENT View Post
          All I see is abstraction. The Linux kernel has many, many appliances. But I have no idea what I'm talking about...

          But you can still learn something

          In fact, I always learn something new every day



          • #65
            Originally posted by michal View Post
            I'm aware of that. That's true for X.org drivers, syscall support in glibc and other things.

            But let's face the truth - you cannot compare apples to oranges. If you want to compare kernel speed, you need to change only the kernel. That was true 20 years ago and 10 years ago, when I played with this stuff, and it's still true now.

            What you are talking about is the speed of the whole OS. And you are right - for such a benchmark everything should be updated to the appropriate versions. But AFAIU this benchmark intends to measure only kernel speed.
            If you really want to test only kernel speed, then you shouldn't use a regular GUI distribution anyway. You would need a different testing environment, stripped of almost all non-essential stuff.



            • #66
              Originally posted by V!NCENT View Post
              All I see is abstraction. The Linux kernel has many, many appliances. But I have no idea what I'm talking about...

              I don't care if you test it on Windows or Plan9. You have the base performance of a virtual machine implementation where there is no chance of direct HW tricks. Why is that not perfect? But no... let's check for HW-specific optimisation drivers that can boost performance in areas that have nothing to do with the kernel itself but with a specific implementation that's not representative of all the hardware that Linux can run with...

              Don't pay attention to the retard that questions the established theory...
              You still do not know what you are talking about...

              It is true that some of the kernel's progress can be shown in a virtualized environment too. But that doesn't mean you can use these results as an indication of kernel progress in general.

              I don't know if you are an IT pro or a student, but I find it hard to believe that you are and still can't understand what the kernel is all about: managing hardware. That's it. Just managing hardware and drivers, and assigning priorities. Almost nothing else.

              So you are telling me that when you want to benchmark a kernel whose primary role is to manage real hardware, you prefer a virtual machine over real hardware... Nice...

              You also fail to understand that what you are really benchmarking is the 2.6.35 host kernel's performance running KVM. That kernel handles the Core i7 and the RAM and the disk and the threads and everything. You are also benchmarking KVM itself.

              That is why most of the graphs are a straight line across almost 5 years of kernels... There are exceptions, but that is the rule in this test. The real reason is that whatever the guest OS is, threads will run based on the host's capabilities, not the guest's.



              • #67

                1. You can't do benchmarks on a virtual machine, especially benchmarks of a kernel.
                2. Comparing CPU-bound applications like bzip2 or encryption is totally wrong and misleading, as the kernel is barely involved in such computation. The performance of such applications does not depend on the kernel version or even the OS; it depends on the CPU and the compilation flags (see the quick illustration below).
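
                A quick way to see this (a minimal sketch, assuming GNU coreutils and a stock bzip2; the file path and the timings are purely illustrative): nearly all of the elapsed time is user time, so the kernel version barely matters.

                    # create some test data (the path is just an example)
                    dd if=/dev/urandom of=/tmp/bench.dat bs=1M count=256
                    # compress it; "user" dwarfs "sys", so the work happens in userspace, not in the kernel
                    time bzip2 -k /tmp/bench.dat
                    #   real 0m25s   user 0m24.8s   sys 0m0.2s   <- illustrative numbers only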

                Sorry but this comparison is mostly useless.



                • #68
                  Originally posted by TemplarGR View Post
                  If you really want to test only kernel speed, then you shouldn't use a regular GUI distribution anyway. You would need a different testing environment, stripped of almost all non-essential stuff.
                  It's not really necessary. There is one simple trick that you can use (and it is commonly used) - boot straight to /bin/sh. Just pass init=/bin/sh as a kernel parameter and you get a really clean testing environment.
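
                  For illustration, a boot entry might look roughly like this (a sketch assuming GRUB 2; the kernel image and root device are placeholders for whatever the test machine actually uses):

                      # kernel line of the boot entry (or typed at the GRUB prompt)
                      linux /boot/vmlinuz-2.6.37 root=/dev/sda1 ro init=/bin/sh
                      # once dropped into the shell, remount the root read-write if the benchmark needs to write
                      mount -o remount,rw /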



                  • #69
                    Originally posted by TemplarGR View Post
                    I propose adding a page to this article, testing natively on a 2005-era machine using Fedora Core 4 with its stock kernel, and then compiling a recent kernel. I believe it will prove my point...
                    And what do you suspect the result will be?

                    Anyway, I think Linux should focus on bug fixes and slimming down the kernel every other release, instead of adding new functionality all the time.



                    • #70
                      Originally posted by michal View Post
                      In your test you are benchmarking:
                      - hardware
                      - host kernel space
                      - host user space
                      - virtualization platform
                      - guest kernel space
                      - guest user space

                      I don't give a flying ship about results. You can do what you want in your free time. Just don't call these benchmarks meaningful
                      But since the only variable is the guest kernel (I assume any background activity on the host was disabled), there is still value in this comparison. The results are not directly comparable with other benchmarks, but they should be quite telling about the differences between the tested kernels.
                      After all, the virtual machine is just that - a machine. It has certain performance characteristics, but so do all real machines - and they are all different.
                      You would have a point if it could be shown that the given VM configuration (including the host kernel, user space and the virtualization platform) has vastly, non-linearly different characteristics from a real hardware machine - for example, if assumptions about hardware that were valid when the kernels were written did not hold on a VM. One such assumption is disk_performance << dram_performance << sram_performance << hardware_register_performance.
                      Then again, we have seen this happen before with real hardware (CPU caches, super-scalar, out-of-order architectures) and are seeing it today (SSDs, multiprocessing). So testing on real hardware might not always be entirely fair either.



                      • #71
                        Originally posted by kebabbert View Post
                        And what do you suspect the result will be?

                        Anyway, I think Linux should focus on bug fixes and slimming down the kernel every other release, instead of adding new functionality all the time.
                        I suspect results would vary. But even if they didn't, at least then we could really speak about real kernel performance...



                        • #72
                          Originally posted by misiu_mp View Post
                          But since the only variable is the guest kernel (I assume any background activity on the host was disabled), there is still value in this comparison. The results are not directly comparable with other benchmarks, but they should be quite telling about the differences between the tested kernels.
                          After all, the virtual machine is just that - a machine. It has certain performance characteristics, but so do all real machines - and they are all different.
                          You would have a point if it could be shown that the given VM configuration (including the host kernel, user space and the virtualization platform) has vastly, non-linearly different characteristics from a real hardware machine - for example, if assumptions about hardware that were valid when the kernels were written did not hold on a VM. One such assumption is disk_performance << dram_performance << sram_performance << hardware_register_performance.
                          Then again, we have seen this happen before with real hardware (CPU caches, super-scalar, out-of-order architectures) and are seeing it today (SSDs, multiprocessing). So testing on real hardware might not always be entirely fair either.
                          It is true that the only variable is the guest kernel. That is why some of the benchmarks do show significant variations. The problem is, a virtual machine like KVM is really just another userspace application with beefy compute and RAM requirements. What is managing the hardware capabilities of the system is the host kernel.

                          You should ask yourself: in what areas do I expect the kernel to be improved to increase performance?

                          What could the kernel possibly do to increase performance?

                          You will find out that most of it relates to hardware management optimizations. There are other forms of improvement of course, but it is mostly about hardware, especially RAM and disk. That is what reading the kernel changelogs will reveal to you: it is mostly about drivers and filesystems...

                          This is not tested in a virtualized environment, at least not KVM or VirtualBox. You need a larger degree of hardware access for the guest kernel to really affect performance.

                          It is true that results may vary depending on the testing machine, but this is what the kernel is about: utilizing hardware. Apart from changing the allocation of threads and/or I/O, there is nothing for the kernel devs to do except improve drivers and filesystems...



                          • #73
                            If a test uses certain hardware, it only tests the drivers for that particular hardware. Unless you run on the same hardware, it's not interesting for you. What is interesting in a kernel is the generic subsystems, process and memory management, which are not that dependent on the particular devices in your system and will give improvements on all systems (given a similar architecture).
                            KVM also emulates certain hardware (or rather qemu does), and the drivers for that hardware are tested in addition to the generic subsystems.
                            As I said, unless the host makes the virtualized machine behave in unexpected ways (e.g. simulating RAM through swap, ignoring fsyncs, making some operations much slower/faster than others), the results are valid. I would like to see some concrete examples of how KVM differs in behaviour from a real hardware configuration.

                            In many of the compute-intensive tests I have noticed a slight trend of increasing performance - something I would expect from incremental optimizations of the crucial generic kernel subsystems (such as memory management, task switching).

                            It is true that results may vary depending on the testing machine, but this is what the kernel is about: utilizing hardware. Apart from changing the allocation of threads and/or I/O, there is nothing for the kernel devs to do except improve drivers and filesystems...
                            They vary a lot! To the point where I would consider tests done on really old hardware just as worthless as you consider those KVM tests.



                            • #74
                              Originally posted by misiu_mp View Post
                              If a test uses certain hardware, it only tests the drivers for that particular hardware. Unless you run on the same hardware, it's not interesting for you. What is interesting in a kernel is the generic subsystems, process and memory management, which are not that dependent on the particular devices in your system and will give improvements on all systems (given a similar architecture).
                              KVM also emulates certain hardware (or rather qemu does), and the drivers for that hardware are tested in addition to the generic subsystems.
                              As I said, unless the host makes the virtualized machine behave in unexpected ways (e.g. simulating RAM through swap, ignoring fsyncs, making some operations much slower/faster than others), the results are valid. I would like to see some concrete examples of how KVM differs in behaviour from a real hardware configuration.

                              In many of the compute-intensive tests I have noticed a slight trend of increasing performance - something I would expect from incremental optimizations of the crucial generic kernel subsystems (such as memory management, task switching).


                              They vary a lot! To the point where I would consider tests done on really old hardware just as worthless as you consider those KVM tests.
                              Nope. You are wrong.

                              First of all, Michael said he used the same configuration on all kernels, meaning he didn't enable new features that required it explicitly. He also benchmarked ext3 performance only. You do realize this makes a world of difference, right?

                              If a program adds a feature in a new version to improve performance, and you don't enable it, then how could you benchmark this improvement?

                              Each kernel has different configuration requirements. It really is much more complicated than you think.
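
                              For context, the usual way one configuration gets carried across many kernel versions looks roughly like this (a sketch; the config path is only an example). make oldconfig prompts only for options that did not exist in the old config, so features added by later kernels tend to stay disabled unless enabled explicitly:

                                  # inside the new kernel source tree
                                  cp /boot/config-2.6.12 .config
                                  make oldconfig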

                              Plus, even though the guest kernel plays a role, it is the host kernel that is managing the hardware, RAM, threads, and I/O.



                              • #75
                                Originally posted by TemplarGR View Post
                                Nope. You are wrong.

                                First of all, Michael said he used the same configuration on all kernels, meaning he didn't enable new features that required it explicitly. He also benchmarked ext3 performance only. You do realize this makes a world of difference, right?

                                If a program adds a feature in a new version to improve performance, and you don't enable it, then how could you benchmark this improvement?

                                Each kernel has different configuration requirements. It really is much more complicated than you think.
                                I don't know which part you are referring to as being wrong, but I agree that not enabling optimizing features that are normally used with certain kernel versions is wrong for benchmarking general performance. That part has nothing to do with the type of machine it is being tested on, though.

                                Originally posted by TemplarGR View Post
                                Plus, even though the guest kernel plays a role, it is the host kernel that is managing the hardware, RAM, threads, and I/O.
                                Does it really? You are saying that if the OS inside the KVM guest creates a thread, that thread is actually created on the host and managed by it? Or that when a guest process requests memory, it is assigned to the process by the host? That would require the host to know about the guest's processes.
                                I assumed the guest is *one* simple process with a ton of memory statically assigned to it, and the host doesn't care much about what the guest is doing with it. If the host is otherwise idle, there is little to affect the guest's memory or CPU speed across the different tests.
                                The newest hardware even allows the guest to handle page faults directly.
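
                                A rough way to check this on a typical KVM host (a sketch only; the qemu binary name and the output layout vary by distribution and version):

                                    # each guest shows up as a single qemu process on the host,
                                    # with its vCPUs as threads of that process
                                    ps -eLf | grep -i qemu
                                    # the guest's memory is simply that process's address space
                                    # (the binary may be qemu-kvm instead of qemu-system-x86_64)
                                    ps -o pid,nlwp,rss,vsz,comm -C qemu-system-x86_64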

