Good test
The way I see it, you've got a write-fast, read-slow file system.
After installation, you hit issues right out of the gate. SELinux requires extra links into various libraries, so even if you're not running it, you're paying for it — the file-system driver had to be modified to include support for it.
If you actually are running SELinux you'll see a huge performance hit.
Mandatory Access Control on a server sucks.
I was surprised to see it perform as well as it did.
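To see which case applies on your own box, a quick check of the SELinux mode is enough — a minimal sketch, assuming a RHEL-family system where the SELinux userland tools may or may not be installed:

```shell
# Report the current SELinux mode (Enforcing, Permissive, or Disabled).
# Falls back gracefully on systems without the SELinux tools installed.
if command -v getenforce >/dev/null 2>&1; then
    getenforce
else
    echo "SELinux tools not installed"
fi
```

`sestatus` gives more detail (loaded policy, mount point) when the policycoreutils package is present.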
Red Hat Enterprise Linux 6.0 Benchmarks
-
Just pretend that it's CentOS 6 or Scientific Linux 6 instead of RHEL 6. Problem solved.
-
Is this test a joke?! Why is the author pitting RHEL 6 against desktop distributions that are free, while RHEL 6 costs money?
I don't know why there aren't systems like SUSE 11, Ubuntu Server 10.10, or CentOS 5.5 in here.
-
Originally posted by glasen: Yes. And because of this, the whole comparison is completely senseless. Phoronix isn't comparing how good a distribution's binaries are; it is comparing the efficiency of a distribution's compiler.
As I said, the problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
However, the other half of our software stack is standard — e.g. DB, web servers, etc. — and we wouldn't -dream- about using unsupported binaries for these roles, especially given the huge number of patches included by RH.
It strikes me that PTS should, whenever possible, use distribution-supplied binaries instead of simply defaulting to self-compiled ones. (I believe the same point was raised by the Fedora devs the last time a thread was started about deploying PTS as a general regression-detection tool.)
- Gilboa
-
Many companies have their own internal-use apps that will be compiled and run on whatever their preferred platform happens to be, or have certain apps that they customize and/or follow upstream development for. Some of these even do a lot of FFT and/or DCT operations (think communications engineering R&D). So while these benchmarks might not represent stereotypical "enterprise" use, they are at least indirectly relevant to a subset of corporate users. I'll agree that the benchmarking could be better-targeted (I doubt many engineers or scientists are sensitive to LAME performance on their work computers), but I think it's going overboard to say that it's completely senseless or irrelevant.
-
I can't help it, but I don't see how benchmarks like "lame", "mafft", or the whole set of compression/decompression benchmarks are relevant for an enterprise distribution.
They rely almost exclusively on the quality of the generated code (i.e., they are compiler benchmarks).
-
Originally posted by gilboa: I assume that PTS still uses self-compiled binaries, right?
The problem with PTS is that it is a very good benchmark when you want to know how fast or slow specific systems are on a common distribution. It is useless when you want to know how fast one distribution is compared to another on the same hardware.
-
It could be interesting to add the Oracle kernel from Oracle "Unbreakable" Linux to these benchmarks: http://www.oracle.com/us/corporate/press/173453
-
Phoronix: Red Hat Enterprise Linux 6.0 Benchmarks
There have been a number of individuals and organizations asking us about benchmarks of Red Hat Enterprise Linux 6.0, which was released earlier this month; we had benchmarked beta versions of RHEL 6 in past months. For those interested in benchmarks of Red Hat's flagship Linux operating system, here are some of our initial benchmarks comparing the official release of Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 5.5, openSUSE, Ubuntu, and Debian.