Red Hat Enterprise Linux 6.0 Benchmarks

  • Red Hat Enterprise Linux 6.0 Benchmarks

    Phoronix: Red Hat Enterprise Linux 6.0 Benchmarks

    A number of individuals and organizations have been asking us about benchmarks of Red Hat Enterprise Linux 6.0, which was released earlier this month; we had benchmarked beta versions of RHEL6 in past months. For those interested in benchmarks of Red Hat's flagship Linux operating system, here are some of our initial benchmarks comparing the official release of Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 5.5, openSUSE, Ubuntu, and Debian.


  • #2
    It could be interesting to add to these benchmarks Oracle's kernel from Oracle "Unbreakable" Linux http://www.oracle.com/us/corporate/press/173453



    • #3
      I assume that PTS still uses self-compiled binaries, right?
      oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
      oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
      oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
      Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



      • #4
        Originally posted by gilboa
        I assume that PTS still uses self-compiled binaries right?
        Yes. And because of this the whole comparison is completely senseless. Phoronix isn't comparing how good the binaries of a distribution are. It is comparing the efficiency of the compiler of a distribution.

        The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
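        To illustrate the point above (not PTS itself, just a minimal sketch): when a benchmark is compiled locally, the resulting binary depends on the host toolchain and flags, so two distributions shipping different compiler versions or defaults will produce different machine code from identical source. The file names and the trivial C stub below are made up for the demonstration.

        ```shell
        # Sketch: the same source built with different optimization settings
        # (standing in for different distro toolchains) yields different binaries,
        # so self-compiled benchmark results partly measure the compiler.
        cat > /tmp/bench_stub.c <<'EOF'
        int work(int n) { int s = 0; int i; for (i = 0; i < n; i++) s += i * i; return s; }
        int main(void) { return work(1000) & 0xff; }
        EOF
        cc -O0 -o /tmp/stub_O0 /tmp/bench_stub.c
        cc -O2 -o /tmp/stub_O2 /tmp/bench_stub.c
        # Identical source, different machine code:
        cmp -s /tmp/stub_O0 /tmp/stub_O2 || echo "binaries differ"
        ```

        The same effect applies across distributions: RHEL's GCC and Ubuntu's GCC are different versions with different patches and defaults, so "the same test" is not really the same binary.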



        • #5
          I can't help but wonder how benchmarks like "lame", "mafft", or the whole set of compression/decompression benchmarks are relevant for an enterprise distribution.

          They almost exclusively rely on the quality of the generated code (i.e., they are compiler benchmarks).



          • #6
            Many companies have their own internal-use apps that will be compiled and run on whatever their preferred platform happens to be, or have certain apps that they customize and/or follow upstream development for. Some of these even do a lot of FFT and/or DCT operations (think communications engineering R&D). So while these benchmarks might not represent stereotypical "enterprise" use, they are at least indirectly relevant to a subset of corporate users. I'll agree that the benchmarking could be better-targeted (I doubt many engineers or scientists are sensitive to LAME performance on their work computers), but I think it's going overboard to say that it's completely senseless or irrelevant.



            • #7
              Originally posted by glasen
              Yes. And because of this the whole comparison is completely senseless. Phoronix isn't comparing how good the binaries of a distribution are. It is comparing the efficiency of the compiler of a distribution.

              The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
              I'm a big RHEL user - we use it to deploy our own software stack - which makes part of the comparison - namely kernel performance (a large chunk of our software is kernel-based) and compiler performance - paramount.
              However, the other half of our software stack is standard - e.g. DB, web servers, etc. - and we wouldn't -dream- about using unsupported binaries for these roles - especially given the huge number of patches included by RH.
              It strikes me that PTS should, whenever possible, use distribution-supplied binaries instead of simply defaulting to self-compiled binaries. (I believe the same point was raised by the Fedora devs the last time a thread was started about deploying PTS as a general regression-detection tool.)

              - Gilboa
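              As a rough sketch of what benchmarking the distribution-supplied binary (rather than a self-built one) could look like - this is a hypothetical micro-benchmark, not a PTS feature, and the file names are made up:

              ```shell
              # Sketch: time the gzip shipped by the distribution, so the result
              # reflects the vendor's patched, supported binary rather than a
              # locally compiled one.
              dd if=/dev/urandom of=/tmp/sample.bin bs=1M count=8 2>/dev/null
              command -v gzip          # confirm which binary PATH resolves to
              time gzip -9 -c /tmp/sample.bin > /tmp/sample.bin.gz
              ```

              Results from a harness like this would differ per distribution precisely because of the vendor patches gilboa mentions, which is the behavior an enterprise user actually cares about.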



              • #8
                Is this test a joke?! Why is the author pitting RHEL 6 against desktop distributions which are free of charge, while RHEL 6 costs money?

                I don't know why systems like SUSE 11, Ubuntu Server 10.10, or CentOS 5.5 aren't included.



                • #9
                  Just pretend that it's CentOS 6 or Scientific Linux 6 instead of RHEL 6. Problem solved.



                  • #10
                    Good test

                    The way I see it, you've got a write-fast, read-slow file system.

                    After installation, right out of the gate you've got issues. SELinux requires extra hooks in various libraries, so even if you're not running it, you're still paying for it. The file-system driver had to be modified to carry the support.


                    If you actually are running SELinux you'll see a huge performance hit.
                    Mandatory Access Control on a server sucks.

                    I was surprised to see it perform as well as it did.
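                    Following up on the SELinux point above, a minimal sketch for checking whether SELinux is active before benchmarking (the `getenforce` utility ships with the SELinux userland tools; the fallback message is my own):

                    ```shell
                    # Sketch: report the SELinux mode, since enforcing mode adds
                    # per-access policy checks that can skew file-system results.
                    if command -v getenforce >/dev/null 2>&1; then
                        getenforce            # prints Enforcing, Permissive, or Disabled
                    else
                        echo "SELinux tools not installed"
                    fi
                    ```

                    Comparing a run with `Enforcing` against one with `Permissive` or `Disabled` would isolate how much of the hit is actually SELinux.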
