Five Years Of Linux Kernel Benchmarks: 2.6.12 Through 2.6.37


  • Guest
    replied
    Originally posted by V!NCENT View Post
    So what? It was benchmarked in a thread... That's hella better than doing it on real hardware, where performance may be hindered by driver implementations...

    A thread runs consistently. Again... the problem? Multithreading? Errr... boohoo!
    In your test you are benchmarking:
    - hardware
    - host kernel space
    - host user space
    - virtualization platform
    - guest kernel space
    - guest user space

    I don't give a flying ship about the results. You can do what you want in your free time. Just don't call these benchmarks meaningful.
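
    A minimal sketch of that point, assuming a Linux guest (the "hypervisor" CPU flag in /proc/cpuinfo is set by common x86 hypervisors; the layer names simply mirror the list above):

```python
# Report the layers a guest-side benchmark actually exercises.
# Assumes a Linux system with /proc/cpuinfo; the "hypervisor" flag is
# set for KVM/Xen/VMware/VirtualBox guests on x86.
import platform

def running_in_vm():
    try:
        with open("/proc/cpuinfo") as f:
            return any("hypervisor" in line for line in f if line.startswith("flags"))
    except OSError:
        return False

layers = ["user space", "kernel space (%s)" % platform.release()]
if running_in_vm():
    layers += ["virtualization platform", "host user space", "host kernel space"]
layers.append("hardware")

print("This benchmark measures, at minimum:")
for layer in layers:
    print(" - " + layer)
```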



  • V!NCENT
    replied
    So what? It was benchmarked in a thread... That's hella better than doing it on real hardware, where performance may be hindered by driver implementations...

    A thread runs consistently. Again... the problem? Multithreading? Errr... boohoo!



  • igf1
    replied
    Originally posted by kebabbert View Post
    Interesting that Intel Corp. has also confirmed that Linux is getting slower and slower. Maybe this is because Linux is getting more and more bloated?

    http://www.theregister.co.uk/2009/09..._bloated_huge/
    "Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two per centage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.

    "We're getting bloated and huge. Yes, it's a problem," said Linux Torvalds."



    Maybe Linux should focus one release on bug fixes instead of introducing new functionality all the time, just like Apple did. One of the recent big Mac OS X releases was devoted only to bug fixes and slimming down OS X, which paid off.


    I conducted a study that reached the same conclusion: 2.6.24 through 2.6.30 were a downward spiral, which apparently has been addressed to some degree judging by the data here. However, "bloat" is an artifact of non-modular development. The goal is to keep the kernel "pure" and let anyone bloat their distribution as they see fit.
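
    For what it's worth, a quick arithmetic check of how per-release drops compound; the 2% and 12% inputs are the figures quoted from the article above, the rest is plain arithmetic:

```python
# How per-release performance drops compound over ten releases.
releases = 10

remaining = (1.0 - 0.02) ** releases          # flat 2% drop per release
print("2%% per release over %d releases: %.1f%% cumulative drop"
      % (releases, (1.0 - remaining) * 100))

avg = 1.0 - (1.0 - 0.12) ** (1.0 / releases)  # what a 12% cumulative drop averages out to
print("12%% cumulative over %d releases: about %.2f%% per release"
      % (releases, avg * 100))
```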



  • Guest
    replied
    Originally posted by TemplarGR View Post
    and if possible using appropriate userspace software for each kernel
    No. Not in a _kernel_ test.

    If you want to test the kernel, you should not change userspace at all.

    It's difficult, because old kernels don't build on modern systems - you may not even be able to boot 2.6.27 (which is still supported) on the latest distros.



  • TemplarGR
    replied
    Originally posted by mtippett View Post
    Almost all old kernels would exist in virtual machines in corporate environments these days. The expensive Linux application that you bought or had built 10 years ago, which only supports and works on Red Hat 7.1, won't be running on the same hardware. That original hardware would have expired a few years after deployment, so most corporate environments will have moved those installations to a VM.

    Particularly for the older kernels, it's closer to a real production scenario than most would believe.
    What you say may be true, but it doesn't justify the methodology.

    1) As I said, most kernel optimizations are already there, since the host's kernel is 2.6.35. It's the host's kernel that is managing the metal, not the guest's. These tests do not show, for example, the difference between running a 2.6.12 kernel on the host and running a 2.6.35 kernel on the host.

    2) The userland is too old (from what I read, you used Fedora Core 4 with all the kernels). The graphics stack, for example, has been vastly improved since then; libraries are old, etc.

    3) While VMs are important for businesses, and what you have tested is indeed a common use-case scenario, this scenario doesn't correctly describe the kernel's evolution in general. What this article claims is that it measures the kernel's performance over time, in *general*. It should be done on real hardware, and if possible using appropriate userspace software for each kernel. I recognize this is difficult and time-consuming, but it is the right way to do it.



  • mtippett
    replied
    Originally posted by TemplarGR View Post
    I believe Michael's methodology was flawed. I am surprised no one mentioned that these tests were run inside a virtual machine. No useful conclusions can be drawn from these tests:

    1) The host was using almost the latest kernel, so the virtual machine benefited from all the current improvements anyway.

    2) The userland, from what I understand, was still from 2005. I believe that in many cases a more up-to-date userland could leverage more from recent kernels in some of the tests.
    Almost all old kernels would exist in virtual machines in corporate environments these days. The expensive Linux application that you bought or had built 10 years ago, which only supports and works on Red Hat 7.1, won't be running on the same hardware. That original hardware would have expired a few years after deployment, so most corporate environments will have moved those installations to a VM.

    Particularly for the older kernels, it's closer to a real production scenario than most would believe.



  • Guest
    replied
    Originally posted by TemplarGR View Post
    I believe Michael's methodology was flawed. I am surprised no one mentioned that these tests were run inside a virtual machine.
    Maybe that's because the Phoronix methodology is always flawed and nobody takes these results seriously?

    At least not the kernel devs who know something about testing methodologies.



  • TemplarGR
    replied
    I believe Michael's methodology was flawed. I am surprised no one mentioned that these tests were run inside a virtual machine. No useful conclusions can be drawn from these tests:

    1) The host was using almost the latest kernel, so the virtual machine benefited from all the current improvements anyway.

    2) The userland, from what I understand, was still from 2005. I believe that in many cases a more up-to-date userland could leverage more from recent kernels in some of the tests.

    A quick way to test this is to take a 2005-era machine, install Fedora Core 4, benchmark it, and then install Fedora 14 (or simply compile the latest kernel if you want only kernel improvements) and benchmark it again. I strongly believe there will be many differences in the results...
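
    A minimal sketch of what such a same-machine comparison could run, kept to the Python standard library so the identical script works on both the old and the new install. The pipe ping-pong workload and the round count are assumptions for illustration; it mainly stresses the scheduler and context-switch path, which is what changes between kernels:

```python
# Pipe ping-pong between parent and child: context-switch and scheduler
# heavy, so the numbers track the kernel rather than userland libraries.
# Standard library only, so the same script runs on an old and a new install.
import os, time

ROUNDS = 20000                    # illustrative; raise for steadier numbers
MSG = "x".encode()                # one byte; works on both Python 2 and 3

def pingpong(rounds):
    p2c_r, p2c_w = os.pipe()      # parent -> child
    c2p_r, c2p_w = os.pipe()      # child -> parent
    pid = os.fork()
    if pid == 0:                  # child: echo one byte back per round
        for _ in range(rounds):
            os.read(p2c_r, 1)
            os.write(c2p_w, MSG)
        os._exit(0)
    start = time.time()
    for _ in range(rounds):
        os.write(p2c_w, MSG)
        os.read(c2p_r, 1)
    elapsed = time.time() - start
    os.waitpid(pid, 0)
    return elapsed

t = pingpong(ROUNDS)
print("%d round trips in %.3f s (%.1f us each)" % (ROUNDS, t, t / ROUNDS * 1e6))
```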



  • Jimbo
    replied
    Originally posted by smitty3268 View Post
    I really don't think you'll find very much of that among kernel developers. That's why they are kernel developers and not writing websites or desktop apps, and if you read through the kernel mailing lists I think you'll find that there isn't a whole lot of tolerance there for people who write bad code.
    +1

    The problem is not supporting old hardware or bad kernel developers. As Linus says, the problem is that a lot of new functionality has been added recently without time to stabilize, and this could be dangerous in stable environments: KMS (lots of code), AppArmor, responsiveness patches, VFS, schedulers...

    I believe performance is not currently being compromised; if you try ext4 or XFS with the nobarrier mount option, you should find that recent kernels beat the ext3 benchmarks easily.
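
    A minimal sketch of the kind of workload where that mount option shows up, assuming you run it once against a default mount and once after something like `mount -o remount,nobarrier` on an ext4 or XFS test partition; the path, block size and count below are illustrative assumptions:

```python
# fsync-heavy write loop: write barriers (cache flushes) dominate this
# workload, so running it on a default mount vs. a nobarrier remount of
# ext4/XFS makes the cost of barriers visible.
import os, time

PATH = "/mnt/test/fsync_probe.dat"   # assumed mount point under test
BLOCK = b"\0" * 4096                 # one 4 KiB block per fsync
COUNT = 500

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.time()
for _ in range(COUNT):
    os.write(fd, BLOCK)
    os.fsync(fd)                     # force the block (and barrier) to disk
elapsed = time.time() - start
os.close(fd)
os.remove(PATH)

print("%d fsync'd 4 KiB writes in %.2f s (%.2f ms each)"
      % (COUNT, elapsed, elapsed / COUNT * 1000))
```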



  • MaestroMaus
    replied
    Originally posted by smitty3268 View Post
    I really don't think you'll find very much of that among kernel developers. That's why they are kernel developers and not writing websites or desktop apps, and if you read through the kernel mailing lists I think you'll find that there isn't a whole lot of tolerance there for people who write bad code.
    +1

    The kernel is doing well so far. Most of the bad code and bloat-ware is found in other parts of the software stack.

