Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

  • Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

    Phoronix: Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks

    Following the release of the first beta for Red Hat Enterprise Linux 6.0 back in April, we delivered our first RHEL 6.0 benchmarks, putting it up against CentOS 5.4 and Fedora 12. With the second beta of Red Hat Enterprise Linux 6.0 released last week, we took the workstation build and benchmarked it against the latest releases of Ubuntu, CentOS, and openSUSE.

    http://www.phoronix.com/vr.php?view=15104

  • #2
    benchmarking enterprise distributions.

    well, i don't know about this, but it feels fundamentally wrong somehow.

    • #3
      Originally posted by yoshi314 View Post
      benchmarking enterprise distributions.

      well, i don't know about this, but it feels fundamentally wrong somehow.
      Why? I like it. Although a few more HPC calculation benchmarks are needed.

      • #4
        Originally posted by yoshi314 View Post
        benchmarking enterprise distributions.

        well, i don't know about this, but it feels fundamentally wrong somehow.
        Nothing wrong with making money from open-source software.

        • #5
          Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

          But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

          Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

          Phoronix used to be better than this...

          • #6
            Originally posted by dcc24 View Post
            Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

            But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

            Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

            Phoronix used to be better than this...
            Every benchmark he does seems to elicit someone making this sort of comment.

            Truth: it would be better (for the long-term benefit of the products he is reviewing) if he would spend many hours/days figuring out exactly what components were directly responsible for quantitative deltas between executing the same code, and what could be done -- either to the application or to the platform -- to make it run faster.

            Falsehood: It is the responsibility of a journalist to do this.

            When someone gets murdered, journalists don't pull out the fingerprint dust, black lights, and security camera video tapes. They tell the public what was observed to happen. They let the police figure out exactly what the cause of the happenings was. It's not their job to figure out "if the person had ducked exactly 333 msec before they did, the bullet would have missed them."


            Truth: People don't run benchmarks on their computer all day, so telling someone that they can expect higher performance with a TTSIOD benchmark is not directly applicable to real-world experience with actually-useful applications. Furthermore, your mileage will vary dramatically depending on what library calls are made in your particular app and how carefully your app is optimized (does it pay attention to locality and cache coherency, take advantage of SSE, etc.). Your experience may also differ on a different architecture or hardware.

            Falsehood: There is no value in reporting benchmark results.

            At least some of Michael's benchmarks do, in fact, reflect real-world scenarios to some degree. If you have an extremely busy website serving static pages, and you use Apache, that benchmark at least will give you some meaningful data on which system is fastest on Michael's hardware. The Apache benchmark is not very synthetic; it's more of a stress-test of a very widely-used app. By contrast, the more artificial benchmarks admittedly carry less significance.

            If you're using similar hardware, OS version, and architecture as Michael, the benchmark really ought to be reproducible, at least within a 10 or 20% margin of error. For some of the tests where the results were dramatically different between distros, this margin of error isn't enough to invalidate, at least, the ordering of the distros. Dramatic differences are usually symptoms of major system design changes (such as from the 2.6.18 kernel all the way up to the modern 2.6.32) or of major performance regressions (TTSIOD slow on CentOS and RHEL? WTF?)


            I agree that it would be more useful for software engineers and project contributors to know the how and why of these, but if the performance is really that glaringly bad on a particular distro, the least the article can do is entice a contributor to run the test, reproduce the relatively poor results, then dig in and figure out why.

            Don't equate "non-ideal" with "crap". Michael has finite time and is providing something with positive value, no matter how limited it may be (especially when the tests are "close" -- differences of <10% could be due to ANYTHING). That is more than can be said of many people.

            • #7
              Originally posted by allquixotic View Post
              Don't equate "non-ideal" with "crap". Michael has finite time and is providing something with positive value, no matter how limited it may be (especially when the tests are "close" -- differences of <10% could be due to ANYTHING). That is more than can be said of many people.
              Fair enough, "crap" may be too harsh here. Nevertheless, the results that are published are merely "observations", not "benchmarks". Here's what Wikipedia says:

              Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions.
              Now, he only ran these tests once (he didn't say otherwise); there are no conclusions mentioned, no comparisons made, etc. All the article does is display tabular data. This is not benchmarking.
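
              The distinction between a one-off observation and an iterated benchmark can be made concrete with a small sketch (the timings below are hypothetical, not taken from the article): only repeated runs yield a mean and a relative standard deviation, while a single run is just one number with no spread information at all.

              ```python
              import statistics

              def summarize(timings):
                  """Return (mean, relative standard deviation in percent) for run times."""
                  mean = statistics.mean(timings)
                  # The sample standard deviation needs at least two runs -- a single
                  # observation carries no information about run-to-run variability.
                  rel_dev = 100.0 * statistics.stdev(timings) / mean
                  return mean, rel_dev

              # Hypothetical timings (seconds) from five runs of the same test:
              mean, rel_dev = summarize([41.2, 40.8, 41.5, 40.9, 41.1])
              print(f"mean {mean:.2f}s, relative stdev {rel_dev:.2f}%")
              # → mean 41.10s, relative stdev 0.67%
              ```

              With a relative deviation that small, a reported difference between distros of several percent would at least rise above the run-to-run noise; from one run you cannot say even that.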

              • #8
                We must not forget that, as a beta, debug code is still slowing it all down, so though it may be fun to benchmark, the results really have no meaning... at least not with respect to RHEL 6.

                I think a more interesting focus when looking at betas, with their heaping piles of debug code, is to look at FUNCTIONALITY and to count bugs.

                • #9
                  @dcc24: PTS runs each benchmark a minimum of 3 times (IIRC), more if the results deviate enough.
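
                  That adaptive scheme is easy to sketch. The function name, thresholds, and timings below are illustrative guesses, not PTS internals: run at least three times, then keep re-running while the relative standard deviation of the results stays above a cutoff.

                  ```python
                  import statistics

                  def run_until_stable(run_once, min_runs=3, max_runs=10, max_rel_dev=0.05):
                      """Run `run_once` at least `min_runs` times; keep re-running while
                      the relative standard deviation of the results exceeds
                      `max_rel_dev`, giving up after `max_runs` runs."""
                      results = [run_once() for _ in range(min_runs)]
                      while len(results) < max_runs:
                          rel_dev = statistics.stdev(results) / statistics.mean(results)
                          if rel_dev <= max_rel_dev:
                              break
                          results.append(run_once())
                      return results

                  # Deterministic stand-in for a noisy benchmark (hypothetical timings):
                  samples = iter([10.0, 11.0, 10.1, 10.1, 10.0, 10.1])
                  timings = run_until_stable(lambda: next(samples))
                  print(len(timings), "runs recorded")  # → 4 runs recorded
                  ```

                  Here the first three runs include an 11.0s outlier, pushing the relative deviation above 5%, so a fourth run is taken before the result is accepted.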

                  • #10
                    Originally posted by dcc24 View Post
                    Once again, a useless comparison from Phoronix. All systems have different kernels, different xorg-server, different DEs etc. You are comparing apples and oranges.

                    But enough with that. What's more disappointing about this article is that it lacks commentary/conclusion. If you are not willing to investigate why X performs better than Y, you shouldn't bother writing the article. Maybe Y has more background apps than X? Maybe there is a known regression in Y's kernel?

                     Once again, you (Michael) wrote an article without including any analysis whatsoever, decreasing its value to "crap".

                    Phoronix used to be better than this...
                    The comparison is actually useful for people who are deciding which enterprise OS to use.

                     Comparing apples to apples, as you call it, doesn't make much sense. Not many people are going to install CentOS and update the kernel and X.org in the real world. But it's useful to see if there are any performance benefits for CentOS 5.5 users in upgrading to what will be CentOS 6 (or RHEL 6).

                    • #11
                       Again, the data presented here would be misleading to someone deciding which enterprise OS to use based on this article. The debugging code alone is reason enough not to compare them.

                       The only real benchmark is running the application itself (the one you will be using in a production environment) on both OSes, properly updated and configured. If there are still differences in performance that are not caused by upstream, then and only then can you say "distro X performs better than distro Y".

                      • #12
                        ffmpeg: Ubuntu 1% faster than openSUSE, and the winner.
                        7-zip: openSUSE 2% faster than Ubuntu, and 'virtually the same'.

                        The bias is very hard to miss.

                        • #13
                          I really wish Michael would either post the PTS results on the tracker or give details on the package selection. openSUSE by default installs the desktop kernel instead of the more server-oriented -default kernel.

                          • #14
                            ffmpeg: Ubuntu 1% faster than openSUSE, and the winner.
                            7-zip: openSUSE 2% faster than Ubuntu, and 'virtually the same'.

                            The bias is very hard to miss.
                            Either that, or Michael wrote it that way because the difference between the best and worst cases in the 7-zip test was 4%, while it was 13% in the ffmpeg benchmark.

                            • #15
                              Well, it happens in almost every article. If someone is faster than Ubuntu, you have a good chance of finding a 'virtually the same'. But with Ubuntu leading...
