Ubuntu, Fedora, Mandriva Performance Compared


  • jeffro-tull
    replied
    Wow. Fedora, Ubuntu, and something that isn't OpenSolaris? I'm impressed. And happy. I cut my Linux teeth on Mandrake, and it's still my Plan B for desktop usage.

  • Extreme Coder
    replied
    Thanks for the benchmarks, Michael.
    As you said, what matters more is what features you require from a distro.
    I use Mandriva, and I am happy with it.

  • Del_
    replied
    Thanks for a fantastic job, Michael. Just a little nitpick: please make clear whether the OS and/or the binaries are 64-bit.
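    (For anyone who wants to check their own install, a quick way, assuming a typical GNU userland, is something like:

        uname -m            # x86_64 = 64-bit kernel, i686 = 32-bit
        file /bin/ls        # "ELF 64-bit LSB executable" = 64-bit userland binaries
        getconf LONG_BIT    # prints 32 or 64 for the current environment

    The exact output strings vary a bit between distros, but the kernel/userland distinction is the interesting part.)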

  • ivanovic
    replied
    Originally posted by Rhettigan View Post
    I think what is most interesting to me in these results is the difference in compilation times, especially for the kernel. I guess, in that case, one could argue that there is simply more source to compile for 2.6.25 than for 2.6.24, or that Fedora 9's gcc is slower than Ubuntu 8.04's.
    Uhm, the kernel that is compiled is identical for all tests. The only "real" differences are the kernel of the underlying system and the compiler used. And it is *very* likely that gcc 4.3 is slower than 4.2.x, but it probably also produces better-optimized binary code.
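    (If anyone wants to isolate the compiler from the rest of the distro, a rough sketch, assuming gcc-4.2 and gcc-4.3 are both installed under those names, is to build the same tree twice on the same box:

        tar -xjf linux-2.6.25.4.tar.bz2 && cd linux-2.6.25.4
        make defconfig
        make clean && time make -j2 CC=gcc-4.2 bzImage
        make clean && time make -j2 CC=gcc-4.3 bzImage

    Same source, same config, same hardware; only the compiler changes.)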

  • deanjo
    replied
    Originally posted by Luis View Post
    Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17), and the developer answers that the regression is just "as expected".

    That's a pretty old post. Here are a couple of benchmarks with barriers enabled. The top one is a pair of old Maxtor 6L250S0 250 GB drives; the one below is a pair of Seagate 7200.11 500 GB drives. Both are running RAID 0 on a dmraid setup.

  • Luis
    replied
    Originally posted by deanjo View Post
    Very doubtful that my hardware does not support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers with ES.2 series drives that I run at work.
    Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17), and the developer answers that the regression is just "as expected".

  • deanjo
    replied
    Originally posted by Luis View Post
    XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and disables them? This is not a rare case, and you should see some message about it (probably in dmesg?).

    On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference when mounting with the defaults vs. using the "nobarrier" option (for XFS) in /etc/fstab.

    Anyone who wants to test with ext3: the option is "barrier=0" (disable) or "barrier=1" (enable).
    Very doubtful that my hardware does not support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers with ES.2 series drives that I run at work.

  • Luis
    replied
    Originally posted by deanjo View Post
    Barriers have little to no effect in my experience on XFS (< 1%)
    XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and disables them? This is not a rare case, and you should see some message about it (probably in dmesg?).

    On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference when mounting with the defaults vs. using the "nobarrier" option (for XFS) in /etc/fstab.

    Anyone who wants to test with ext3: the option is "barrier=0" (disable) or "barrier=1" (enable).
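    A minimal sketch of that test (the device and mount point are just placeholders for your own setup):

        # /etc/fstab - defaults (barriers on for XFS)
        /dev/sda3  /mnt/test  xfs   defaults   0 0
        # /etc/fstab - barriers off for XFS
        /dev/sda3  /mnt/test  xfs   nobarrier  0 0
        # ext3 equivalent: barrier=1 (on) or barrier=0 (off)
        /dev/sda3  /mnt/test  ext3  barrier=1  0 0

        # time the untar plus a sync so the flush is included
        cd /mnt/test && time sh -c 'tar -xjf ~/linux-2.6.25.4.tar.bz2 && sync'

    Remount (or reboot) between runs so the two timings actually hit the different mount options.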

  • deanjo
    replied
    Originally posted by Luis View Post
    Well, since these benchmarks are mostly about the hardware and the kernel, it's not a big surprise that all perform quite similarly.

    It's a pity that openSUSE is not in the test, because it's the only one that enables barriers on the filesystem by default, and it would be nice to measure their cost, which in I/O-bound tests can be up to 30% in my experience. (There was a recent thread on lkml about enabling them by default in ext3, but Andrew Morton was opposed because of this performance cost. I'm not sure how it all ended up, but I saw a commit a couple of days ago to enable them by default in ext4, so in the long run that would be the default anyway.)
    Barriers have little to no effect in my experience on XFS (< 1%). (You can choose which filesystem you want to use in openSUSE.) What can really kill performance on XFS are the settings.

    noatime - last access time is not recorded (files/dirs)
    biosize - sets the default buffered I/O size
    logbufs - number of in-memory log buffers
    logbsize - size of each log buffer

    Changing them in fstab on my systems dropped my SQLite tests from 59 seconds to under 2 seconds, and throughput jumped by 13 MB/s.
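    For reference, a sample fstab line with that kind of tuning might look like the one below (an illustrative sketch only; the device, mount point, and values are examples, not the exact settings from my boxes):

        /dev/sda3  /home  xfs  noatime,logbufs=8,logbsize=256k  0 2

    Worth benchmarking on your own workload before settling on values.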

  • Michael
    replied
    Originally posted by Vadi View Post
    Mandriva? I'm surprised. I thought it was Ubuntu, openSUSE, and Fedora.
    openSUSE 11.0 RC1 was having some issues.
