Red Hat Enterprise Linux 6.0 Benchmarks

  • Red Hat Enterprise Linux 6.0 Benchmarks

    Phoronix: Red Hat Enterprise Linux 6.0 Benchmarks

    A number of individuals and organizations have been asking us about benchmarks of Red Hat Enterprise Linux 6.0, which was released earlier this month; we had benchmarked beta versions of RHEL6 in past months. For those interested in benchmarks of Red Hat's flagship Linux operating system, here are some of our initial benchmarks comparing the official release of Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 5.5, openSUSE, Ubuntu, and Debian.

    http://www.phoronix.com/vr.php?view=15510

  • #2
    It could be interesting to add the Oracle kernel from Oracle "Unbreakable" Linux to these benchmarks: http://www.oracle.com/us/corporate/press/173453

    • #3
      I assume that PTS still uses self-compiled binaries, right?
      DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
      SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
      BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
      LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

      • #4
        Originally posted by gilboa
        I assume that PTS still uses self-compiled binaries, right?
        Yes. And because of this, the whole comparison is completely senseless. Phoronix isn't comparing how good a distribution's binaries are; it is comparing the efficiency of a distribution's compiler.

        The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.

        • #5
          I can't help it, but I don't see how benchmarks like "lame", "mafft" or the whole set of compression/decompression benchmarks are relevant for an enterprise distribution.

          They rely almost exclusively on the quality of the generated code (i.e. they are compiler benchmarks).

          • #6
            Many companies have their own internal-use apps that will be compiled and run on whatever their preferred platform happens to be, or have certain apps that they customize and/or follow upstream development for. Some of these even do a lot of FFT and/or DCT operations (think communications engineering R&D). So while these benchmarks might not represent stereotypical "enterprise" use, they are at least indirectly relevant to a subset of corporate users. I'll agree that the benchmarking could be better-targeted (I doubt many engineers or scientists are sensitive to LAME performance on their work computers), but I think it's going overboard to say that it's completely senseless or irrelevant.

            • #7
              Originally posted by glasen
              Yes. And because of this, the whole comparison is completely senseless. Phoronix isn't comparing how good a distribution's binaries are; it is comparing the efficiency of a distribution's compiler.

              The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
              I'm a big RHEL user (we use it to deploy our own software stack), which makes part of the comparison, namely kernel performance (a large chunk of our software is kernel-based) and compiler performance, paramount.
              However, the other half of our software stack is standard (e.g. DB, web servers, etc.), and we wouldn't -dream- about using unsupported binaries for these roles, especially given the huge number of patches included by RH.
              It strikes me that PTS should, whenever possible, use distribution-supplied binaries instead of simply defaulting to self-compiled binaries. (I believe the same point was raised by the Fedora devs the last time a thread was started about deploying PTS as a general regression-detection tool.)

              - Gilboa
              DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
              SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
              BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
              LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

              • #8
                Is this test a joke?! Why is the author comparing RHEL 6 against desktop distributions that are free of charge, while RHEL 6 costs money?

                I don't know why there aren't systems like SUSE 11, Ubuntu Server 10.10 or CentOS 5.5 in the comparison.

                • #9
                  Just pretend that it's CentOS 6 or Scientific Linux 6 instead of RHEL 6. Problem solved.

                  • #10
                    Good test

                    The way I see it, you've got a write-fast, read-slow file system.

                    After installation, right out of the gate you've got issues. SELinux requires extra linking into various libraries, so even if you're not running it, you're still running it. The file-system driver had to be modified to include support for it.

                    If you actually are running SELinux you'll see a huge performance hit.
                    Mandatory access control on a server sucks.

                    I was surprised to see it perform as well as it did.

                    • #11
                      If you actually are running SELinux you'll see a huge performance hit.
                      I'm seeing a 1-5% performance hit due to SELinux on my Fedora, CentOS and RHEL platforms.
                      Care to share benchmark figures? (A small sketch for recording the SELinux state during a run follows below.)
                      DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                      SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
                      BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                      LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.
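                      As mentioned above, when comparing runs with and without SELinux it helps to record whether enforcement was actually on while the benchmark ran. A minimal sketch using libselinux (assuming the development headers are installed; link with -lselinux):

                      Code:
                      /* Report whether SELinux is enabled and whether it is enforcing,
                       * so benchmark results can be labelled accordingly. */
                      #include <stdio.h>
                      #include <selinux/selinux.h>

                      int main(void)
                      {
                          if (!is_selinux_enabled()) {          /* 0 = disabled, 1 = enabled */
                              printf("SELinux: disabled\n");
                              return 0;
                          }

                          int mode = security_getenforce();     /* 1 = enforcing, 0 = permissive, -1 = error */
                          if (mode < 0)
                              perror("security_getenforce");
                          else
                              printf("SELinux: enabled, %s\n", mode ? "enforcing" : "permissive");
                          return 0;
                      }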

                      • #12
                        Originally posted by Ex-Cyber
                        Many companies have their own internal-use apps that will be compiled and run on whatever their preferred platform happens to be ..... but I think it's going overboard to say that it's completely senseless or irrelevant.
                        Most of the results are from compiler benchmarks, which depend mostly on the quality of the code generated by the compiler and not much on other factors.
                        It's like using SuperPI to compare Windows 95 to Windows 7. Sure, you'll get some numbers out, but you're not really benchmarking the operating system.

                        I am not saying compiler performance is irrelevant; on the contrary, I think the compiler is a critical part. It's just not the only component, and it doesn't have a whole lot to do with the operating system.

                        - Clemens

                        • #13
                          Fixed the O_SYNC problem?

                          It is easy to get good performance when you cheat and make fast, unsafe hacks. Have they fixed those problems in Linux yet?

                          http://milek.blogspot.com/2010/12/li...-barriers.html
                          "This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now)."

                          • #14
                            Originally posted by kebabbert
                            It is easy to get good performance when you cheat and make fast, unsafe hacks. Have they fixed those problems in Linux yet?

                            http://milek.blogspot.com/2010/12/li...-barriers.html
                            "This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now)."
                            How about just reading the comments under the blog post you're actually linking to? The answer is right there. But then again, despite formulating this post as a question, you're not really posting this link to get an answer - you're doing it because you think you're scoring points in your misguided "praise Solaris by badmouthing Linux" crusade. As usual, you just manage to make yourself look bad by obviously not actually understanding the technical content of the posts you're linking to, and furthermore by "asking" something that is already answered in the commentary in the very link you posted.


                            Nonetheless, to summarize here, since you're too lazy to read your own links: yes, O_SYNC is now POSIX compliant in Linux.

                            Adding to this, and making a couple of points that weren't made in the commentary to the above blog post:

                            1. O_SYNC == O_DSYNC really was not that big a deal. The worst that could happen is that if your system crashes, you MIGHT end up with the timestamps of a file not being updated. That's all (a minimal sketch of what the distinction actually means in practice follows after point 2 below). The reason it took so long to get fixed was pretty much that almost no one cared. Incidentally, AIX also defines O_SYNC = O_DSYNC by default, though this can be changed by setting an environment variable.

                            2. For those who absolutely needed it, proper O_SYNC was available in Linux before, as well, by picking the right filesystem (xfs) with the osyncisosync mount option.
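                            For anyone wondering what the O_SYNC vs O_DSYNC distinction means in practice, here is a minimal sketch of the POSIX semantics (the file names and data are made up for illustration; on the older kernels discussed above, both flags effectively gave you the O_DSYNC behaviour described in the comments):

                            Code:
                            /* O_DSYNC: each write() returns only after the file data (and any
                             * metadata needed to read that data back) is on stable storage,
                             * roughly an implicit fdatasync() per write.
                             * O_SYNC: additionally flushes the remaining metadata, such as
                             * timestamps, roughly an implicit fsync() per write. */
                            #define _POSIX_C_SOURCE 200809L
                            #include <fcntl.h>
                            #include <stdio.h>
                            #include <string.h>
                            #include <unistd.h>

                            int main(void)
                            {
                                const char buf[] = "benchmark record\n";

                                int dfd = open("/tmp/dsync.dat", O_WRONLY | O_CREAT | O_DSYNC, 0644);
                                int sfd = open("/tmp/osync.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
                                if (dfd < 0 || sfd < 0) {
                                    perror("open");
                                    return 1;
                                }

                                /* Data-integrity completion only. */
                                if (write(dfd, buf, strlen(buf)) < 0)
                                    perror("write O_DSYNC");

                                /* File-integrity completion: timestamps etc. are flushed too. */
                                if (write(sfd, buf, strlen(buf)) < 0)
                                    perror("write O_SYNC");

                                close(dfd);
                                close(sfd);
                                return 0;
                            }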

                            In closing, as I've tried to point out before, both Solaris and Linux are excellent systems, but neither is perfect. It's easy enough to dig up ugly bugs and deficiencies in either, if you deliberately go looking for them. This is the only reason that I include this link:

                            http://blog.lastinfirstout.net/2010/...oss-still.html

                            For those too lazy to follow the link: fsync was broken in ZFS / Solaris 10 until April this year.

                            That's two months _after_ Linux implemented full O_SYNC support in the generic layer. Moreover, the Linux O_SYNC == O_DSYNC equivalence was clearly documented; the ZFS behavior was not.

                            Does this somehow "prove" that Solaris/ZFS sucks? Of course not - no more than your link "proves" anything about Linux. However, you may want to be a bit more careful about throwing this particular alarmist blog post around, since anyone choosing to play the "dueling OS bugs" game can so easily counter this particular post with a more recent and uglier flaw in the OS/Filesystem you put so much passion into promoting.

                            But that's likely the case for just about any such post/bug reference in either direction, and that, in fact, was my main point.

                            • #15
                              TheOrqWithVagrant

                              If you look at the dates, I posted my link early on. Back then there were no relevant comments from Linux people on the link; all of the relevant comments are newer. Just look at the dates and you will see. In short, there were no sane comments when I posted my link. The comments you refer to are new.

                              But that is not the problem. The problem is that Linux deliberately cuts corners and cheats. THAT is a problem. Linux does not obey standards but cheats to get good benchmarks. Not following standards is a bad thing.


                              Then you show a post where ZFS had a problem. So? That problem is considered a bug, and it is not a design choice by the ZFS engineers. ZFS is for the enterprise. ZFS must adhere to enterprise standards to provide data integrity. If ZFS does not, it is considered a bug, not an active design choice.

                              The main point is that Linux - by design - cheats and cuts corners. ZFS had a bug; everybody has bugs. Linux has lots of bugs, which you are surely aware of. But Solaris is not cheating by design.

                              To summarize: Linux - by design - cheats and doesn't follow standards in order to get good benchmarks. Solaris follows standards and does not cheat (but Solaris might have bugs). FYI, Linux also has bugs.



                              Regarding my badmouthing of Linux, so what? It pisses me off when Linux people badmouth Solaris, so I just balance them out. If the Linux fanboys stopped badmouthing Solaris, I would also stop. I show a reaction, not an action: Linux people act, and I react to their actions. Is it a problem that I react to Linux fanboys badmouthing Solaris?
