Ubuntu, Fedora, Mandriva Performance Compared

  • Ubuntu, Fedora, Mandriva Performance Compared

    Phoronix: Ubuntu, Fedora, Mandriva Performance Compared

    Last week we released Phoronix Test Suite 1.0, and one of the article requests we received as a result was to do a side-by-side comparison of the popular desktop Linux distributions. Ask and you shall receive. Today we have 28 test results from Ubuntu 8.04, Fedora 9, and Mandriva 2008.1.

    http://www.phoronix.com/vr.php?view=12438

  • #2
    Well, since these benchmarks are mostly about the hardware and the kernel, it's not a big surprise that all perform quite similarly.

    It's a pity that openSUSE is not in the test, because it's the only one that enables barriers on the filesystem by default, and it would be nice to measure their cost, which in I/O-bound tests can be up to 30% in my experience. (There was a recent thread on LKML about enabling them by default in ext3, but Andrew Morton opposed it because of this performance cost. I'm not sure how it ended up, but I saw a commit a couple of days ago enabling them by default in ext4, so in the long run that would be the default anyway.)
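
    For reference, enabling them by hand on ext3 is just a mount option; an illustrative /etc/fstab line (device and mount point are made-up examples, not from any of the tested systems):

        # illustrative only -- device and mount point are examples
        /dev/sda2  /  ext3  defaults,barrier=1  0  1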



    • #3
      I think what is most interesting to me in these results is the difference in compilation times, especially for the kernel. I guess, in that case, one could argue that there is simply more source to compile for 2.6.25 than for 2.6.24, or that Fedora 9's gcc is slower than Ubuntu 8.04's. The ImageMagick compile time is also interesting.
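
      For what it's worth, a rough way to check the compiler side by hand (just a sketch, not the exact commands the test suite runs) is to time an identical kernel build on each distro:

          tar -xjf linux-2.6.25.tar.bz2
          cd linux-2.6.25
          make defconfig
          time make -j2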

      Anyone have more in-depth insight?



      • #4
        Mandriva? I'm surprised. I thought it was Ubuntu, openSUSE, and Fedora.



        • #5
          Originally posted by Vadi View Post
          Mandriva? I'm surprised. I thought it was Ubuntu, openSUSE, and Fedora.
          OpenSuSE 11.0 RC1 was having some issues.
          Michael Larabel
          http://www.michaellarabel.com/



          • #6
            Originally posted by Luis View Post
            Well, since these benchmarks are mostly about the hardware and the kernel, it's not a big surprise that all perform quite similarly.

            It's a pity that openSUSE is not in the test, because it's the only one that enables barriers on the filesystem by default, and it would be nice to measure their cost, which in I/O-bound tests can be up to 30% in my experience. (There was a recent thread on LKML about enabling them by default in ext3, but Andrew Morton opposed it because of this performance cost. I'm not sure how it ended up, but I saw a commit a couple of days ago enabling them by default in ext4, so in the long run that would be the default anyway.)
            Barriers have little to no effect in my experience on XFS, less than 1%. (You can choose which filesystem you want to use in openSUSE.) What can really kill XFS performance are the mount settings:

            noatime - last access time is not recorded (files/directories)
            biosize - sets the default buffered I/O size
            logbufs - sets the number of in-memory log buffers
            logbsize - sets the size of each log buffer

            Changing them in fstab on my systems dropped my SQLite tests from 59 seconds to under 2 seconds, and throughput jumped by 13 MB/s.
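
            For example, an illustrative fstab line (device, mount point, and values are just an example, tune them for your own setup):

                /dev/sdb1  /data  xfs  noatime,logbufs=8,logbsize=256k  0  0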



            • #7
              Originally posted by deanjo View Post
              Barriers have little to no effect in my experience on XFS, less than 1%.
              XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and they get disabled? This is not a rare case, and you should see some message about it (probably in dmesg).
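
              Something like this should show whether the kernel turned them off (the exact wording of the message varies):

                  dmesg | grep -i barrier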

              On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference between mounting with the defaults and using the "nobarrier" option (for XFS) in /etc/fstab.

              For anyone who wants to test with ext3, the option is "barrier=0" (disabled) or "barrier=1" (enabled).
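
              A rough test sequence, assuming an XFS filesystem on /dev/sdb1 mounted at /mnt/test (device and paths are just examples):

                  # default mount: barriers on for XFS
                  mount /dev/sdb1 /mnt/test
                  time sh -c 'tar -xjf linux-2.6.25.4.tar.bz2 -C /mnt/test && sync'
                  # remount without barriers and repeat
                  umount /mnt/test
                  mount -o nobarrier /dev/sdb1 /mnt/test
                  time sh -c 'tar -xjf linux-2.6.25.4.tar.bz2 -C /mnt/test && sync'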



              • #8
                Originally posted by Luis View Post
                XFS has barriers on by default (unlike ext3), but it's really strange that they have so little effect for you. Maybe your hardware doesn't support them and they get disabled? This is not a rare case, and you should see some message about it (probably in dmesg).

                On I/O-intensive tasks, the performance difference should be noticeable (it's a trade-off for increased safety; otherwise everyone would enable them). A simple "tar -xjf linux-2.6.25.4.tar.bz2" should reveal the difference between mounting with the defaults and using the "nobarrier" option (for XFS) in /etc/fstab.

                For anyone who wants to test with ext3, the option is "barrier=0" (disabled) or "barrier=1" (enabled).
                Very doubtful that my hardware does not support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers with ES.2 series drives that I run at work.
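
                For what it's worth, barriers only matter when the drive's write cache is on in the first place; that can be checked with something like this (device name is just an example):

                    hdparm -W /dev/sda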



                • #9
                  Originally posted by deanjo View Post
                  Very doubtful that my hardware does not support barriers. They are Seagate 7200.11 series drives, and the same thing is observed on the enterprise servers with ES.2 series drives that I run at work.
                  Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17); the developer answered that the regression was simply "as expected".



                  • #10
                    Originally posted by Luis View Post
                    Well, the cost is known and accepted, so I don't know why you don't notice it in your setup. See this thread where the regression was reported (when they were enabled by default in 2.6.17); the developer answered that the regression was simply "as expected".

                    That's a pretty old post. Here are a couple of benchmarks with barriers enabled. The top one is a pair of old Maxtor 6L250S0 250 GB drives with barriers enabled; the one below is a pair of Seagate 7200.11 500 GB drives. Both are running RAID 0 on a dmraid setup.

                    http://global.phoronix-test-suite.co...97-27642-20171



                    • #11
                      Originally posted by Rhettigan View Post
                      I think what is most interesting to me in these results is the difference in compilation times, especially for the kernel. I guess, in that case, one could argue that there is simply more source to compile for 2.6.25 than for 2.6.24, or that Fedora 9's gcc is slower than Ubuntu 8.04's.
                      Um, the kernel that is compiled is identical for all tests. The only "real" differences are the kernel of the underlying system and the compiler used. And it is *very* likely that gcc 4.3 is slower than 4.2.x, but it probably also produces better-optimized binary code.
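
                      Easy enough to confirm what each distro is actually building with, e.g.:

                          gcc --version
                          uname -r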



                      • #12
                        Thanks for a fantastic job, Michael. Just a little nitpick: please make clear whether the OS and/or binaries are 64-bit.



                        • #13
                          Thanks for the benchmarks, Michael.
                          As you said, what matters more is which features you require from a distro.
                          I use Mandriva, and I am happy with it.



                          • #14
                            Wow. Fedora, Ubuntu, and something that isn't OpenSolaris? I'm impressed. And happy. I cut my Linux teeth on Mandrake, and it's still my Plan B for desktop usage.



                            • #15
                              Originally posted by deanjo View Post
                              That's a pretty old post. Here are a couple of benchmarks with barriers enabled. The top one is a pair of old Maxtor 6L250S0 250 GB drives with barriers enabled; the one below is a pair of Seagate 7200.11 500 GB drives. Both are running RAID 0 on a dmraid setup.

                              http://global.phoronix-test-suite.co...97-27642-20171
                              Are you certain that barriers are supported in that setup? I'm inclined to think they aren't: as far as I know, device-mapper (which dmraid sits on top of) doesn't pass barriers through at this point, so the filesystem would just fall back to running without them. You should check your dmesg for any messages about it.

                              Besides, there is nothing to compare against, so it's hard to say how much barriers are hurting performance.

