A few problems with this benchmarking:
1. You're using Solaris 11 Express and not Solaris 11 GA or even a patched version of Solaris 11. A lot changed between the Express version and GA, and even more since the GA release. You should be testing this on the latest and greatest; otherwise you might as well test against older Linux distros too.
2. If you're going to use GCC, you can install a newer version from OpenCSW or sunfreeware. Otherwise, you should use the native Solaris Studio compiler and optimize for 64-bit.
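As a rough sketch of what that means in practice (exact flags depend on the compiler versions actually installed; treat these as assumptions, not a recipe), a 64-bit optimized build under each compiler might look like:

```shell
# Hypothetical build commands -- the source file name "bench.c" is a
# placeholder and the flag choices are a starting point, not a tuned recipe.

# Solaris Studio cc: -m64 selects the 64-bit ABI, -fast enables an
# aggressive optimization bundle (note it tunes for the build machine's CPU).
cc -m64 -fast -o bench bench.c

# GCC (e.g., a newer build from OpenCSW): a comparable 64-bit optimized build.
gcc -m64 -O3 -o bench bench.c
```

The point of running both is that defaults differ: a GCC-only comparison can understate what the platform's native toolchain delivers.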
3. For your libraries, you should do an "apples to apples" comparison, which means using the same versions on both systems. Again, you can look on OpenCSW or sunfreeware, or compile newer versions yourself.
4. Lastly, what is the validity of the benchmarks? Things like Himeno, SciMark, etc. don't exactly simulate real workloads or use cases. Also, when you do NAS tests, is it with NFSv3 or NFSv4? If the benchmarks were designed and optimized for Linux, it's not exactly a good comparison. There are plenty of generic benchmarks like filebench, vdbench, iperf, etc. that are more meaningful. If you have the $$, the Java and web benchmarks from SPEC are not bad either.
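For something like network throughput, a generic tool such as iperf keeps the comparison OS-neutral. A minimal sketch, assuming two hosts on the same network (the hostname below is a placeholder):

```shell
# On the machine under test, start iperf in server mode:
iperf -s

# On a second machine, run a 60-second TCP throughput test against it.
# "solaris-host" is a placeholder for the server's hostname or IP.
iperf -c solaris-host -t 60
```

Because the same tool and parameters run on every OS being compared, any throughput difference reflects the network stack rather than benchmark tuning.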
In a real-world environment, you aren't going to change the compiler to make it faster. I very much doubt the benchmarks are "optimized" for Linux. Although, I do agree with having some web benchmarks. This is also kind of an old article.
Originally Posted by unixconsole
Yeah, but in the real world you'd use the recommended compiler for a given platform, and gcc is not known for generating good code on non-Linux platforms. Solaris itself is not compiled with gcc either. And if they were going to benchmark against a real BSD, gcc is not the recommended compiler there either. Yeah, some web, Java, and DB benchmarks would be more realistic. The article says it was published on July 13, 2012...
Originally Posted by LinuxID10T