
Thread: Ubuntu, Fedora, Mandriva Performance Compared

  1. #1
    Join Date
    Jan 2007
    Posts
    14,354

    Default Ubuntu, Fedora, Mandriva Performance Compared

    Phoronix: Ubuntu, Fedora, Mandriva Performance Compared

    Last week we released Phoronix Test Suite 1.0 and one of the article requests we received as a result was to do a side-by-side comparison between the popular desktop Linux distributions. Ask and you shall receive. Today we have put up 28 test results from Ubuntu 8.04, Fedora 9, and Mandriva 2008.1.

    http://www.phoronix.com/vr.php?view=12438

  2. #2
    Join Date
    Oct 2007
    Posts
    34

    Default

    Well, since these benchmarks are mostly about the hardware and the kernel, it's not a big surprise that they all perform quite similarly.

    It's a pity that openSUSE is not in the test, because it's the only one that enables barriers on the filesystem by default, and it would be nice to measure their cost, which in I/O-bound tests can be up to 30% in my experience. (There was a recent thread on the LKML about enabling them by default in ext3, but Andrew Morton opposed it because of this performance cost. I'm not sure how it all ended up, but I saw a commit a couple of days ago enabling them by default in ext4, so in the long run that would be the default anyway.)
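
    For reference, a minimal sketch of how that cost could be measured on ext3 (the device /dev/sda1 and mount point / are placeholders; adjust them to your own setup):

        # /etc/fstab entry with barriers explicitly enabled (ext3 leaves them off by default)
        /dev/sda1  /  ext3  defaults,barrier=1  0  1

        # or toggle at runtime without editing fstab, then rerun an I/O-bound test
        mount -o remount,barrier=1 /
        mount -o remount,barrier=0 /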

  3. #3
    Join Date
    Feb 2008
    Posts
    15

    Default

    I think what is most interesting to me in these results is the difference in compilation times, especially for the kernel. In that case, one could argue that there is simply more source to compile in 2.6.25 than in 2.6.24, or that Fedora 9's gcc is slower than Ubuntu 8.04's. The ImageMagick compile time is also interesting.

    Anyone have more in-depth insight?
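
    One way to separate those two explanations would be to time the same source tree with each distribution's compiler, roughly along these lines (the kernel version and job count are just placeholders):

        # run the identical steps on each distro, from the same tarball
        tar -xjf linux-2.6.24.tar.bz2 && cd linux-2.6.24
        make defconfig
        time make -j2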

  4. #4
    Join Date
    Dec 2007
    Posts
    677

    Default

    Mandriva? I'm surprised. I thought it would be Ubuntu, openSUSE, and Fedora.

  5. #5

    Default

    Quote Originally Posted by Vadi View Post
    Mandriva? I'm surprised. I thought it would be Ubuntu, openSUSE, and Fedora.
    openSUSE 11.0 RC1 was having some issues.

  6. #6
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,582

    Default

    Quote Originally Posted by Luis View Post
    Well, since these benchmarks are mostly about the hardware and the kernel, it's not a big surprise that they all perform quite similarly.

    It's a pity that openSUSE is not in the test, because it's the only one that enables barriers on the filesystem by default, and it would be nice to measure their cost, which in I/O-bound tests can be up to 30% in my experience. (There was a recent thread on the LKML about enabling them by default in ext3, but Andrew Morton opposed it because of this performance cost. I'm not sure how it all ended up, but I saw a commit a couple of days ago enabling them by default in ext4, so in the long run that would be the default anyway.)
    Barriers have little to no effect on XFS in my experience, under 1% (and you can choose which filesystem you want to use in openSUSE). What can really hurt the system on XFS are the mount settings:

    noatime - last access time is not recorded (file/dir)
    biosize - sets the default buffered I/O size
    logbufs - sets the number of in-memory log buffers
    logbsize - sets the size of each log buffer

    Changing them in fstab on my systems dropped my SQLite test from 59 seconds to under 2 seconds, and throughput jumped by 13 MB/s; a sample fstab line is sketched below.
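
    For illustration only, a hypothetical fstab line combining those options (the device, mount point, and values are placeholders rather than a recommendation; tune them for your own workload and kernel):

        # /etc/fstab - XFS data partition with the options discussed above
        /dev/sdb1  /data  xfs  noatime,biosize=16,logbufs=8,logbsize=256k  0  2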

  7. #7
    Join Date
    Oct 2007
    Posts
    34

    Default

    Quote Originally Posted by deanjo View Post
    Barriers have little to no effect on XFS in my experience, under 1%
    XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and they get disabled? That's not a rare case, and you should see a message about it (probably in dmesg).

    On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference when mounting with the defaults vs. the "nobarrier" option (for XFS) in /etc/fstab.

    For anyone who wants to test with ext3, the option is "barrier=0" (disabled) or "barrier=1" (enabled).
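
    As a rough sketch of such a test (assuming an XFS partition /dev/sdb1 mounted at /mnt/test; both names are placeholders):

        # with barriers (the XFS default)
        mount /dev/sdb1 /mnt/test
        cd /mnt/test && time tar -xjf ~/linux-2.6.25.4.tar.bz2

        # without barriers
        cd / && umount /mnt/test
        mount -o nobarrier /dev/sdb1 /mnt/test
        cd /mnt/test && time tar -xjf ~/linux-2.6.25.4.tar.bz2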

  8. #8
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,582

    Default

    Quote Originally Posted by Luis View Post
    XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and they get disabled? That's not a rare case, and you should see a message about it (probably in dmesg).

    On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference when mounting with the defaults vs. the "nobarrier" option (for XFS) in /etc/fstab.

    For anyone who wants to test with ext3, the option is "barrier=0" (disabled) or "barrier=1" (enabled).
    It's very doubtful that my hardware doesn't support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers using ES.2 series drives that I run at work.

  9. #9
    Join Date
    Oct 2007
    Posts
    34

    Default

    Quote Originally Posted by deanjo View Post
    It's very doubtful that my hardware doesn't support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers using ES.2 series drives that I run at work.
    Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17), and the developer answers that the regression is just "as expected".

  10. #10
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,582

    Default

    Quote Originally Posted by Luis View Post
    Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17), and the developer answers that the regression is just "as expected".

    That's a pretty old post. Here are a couple of benchmarks with barriers enabled. The top one is a pair of old Maxtor 6L250S0 250 GB drives; the one below is a pair of Seagate 7200.11 500 GB drives. Both are running RAID 0 on a dmraid setup.

    http://global.phoronix-test-suite.co...97-27642-20171
