My big beef here is that you use a *fast* system for your tests. How about running each OS on a netbook, where the slower CPU, more limited memory, and slower disk would all highlight the differences better?
I've also wondered how many times each test is run, and whether you collect meaningful statistics. I'd love to see the mean, median, and standard deviation for each test. I think in a lot of cases we'd see that the systems really are just tied, or show only a small improvement.
It would also be nice to see how reproducible each sub-test really is, which would tell us a lot about how useful each test really is.
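A quick sketch of the kind of summary I mean, assuming each test boils down to a list of per-run timings (the numbers below are made up): if the gap between the two means is small compared to the standard deviations, the systems are effectively tied.

```python
import statistics

def summarize(runs):
    """Summarize repeated benchmark results (e.g. seconds per run).
    Mean and median show the typical result; the sample standard
    deviation shows the run-to-run noise a 'winner' must exceed."""
    return {
        "mean": statistics.mean(runs),
        "median": statistics.median(runs),
        "stdev": statistics.stdev(runs),  # needs at least 2 runs
    }

# Hypothetical timings from five runs of the same test on two distros:
distro_a = [41.2, 40.8, 41.5, 41.0, 41.3]
distro_b = [40.9, 41.4, 41.1, 40.7, 41.6]
print(summarize(distro_a))
print(summarize(distro_b))
```

With numbers like these, the difference in means is well inside one standard deviation, i.e. noise rather than a real win.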
Another tweak would be to run each test multiple times, both dropping and not dropping the VM caches between runs, to see how much the VM and its caching help.
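For reference, a minimal sketch of that cold-vs-warm comparison, assuming the Linux `/proc/sys/vm/drop_caches` knob (writing 3 drops the page cache plus dentries and inodes; it requires root, so the call is shown in a comment rather than executed):

```python
import subprocess
import time

DROP_CACHES = "/proc/sys/vm/drop_caches"  # Linux-only sysctl file

def drop_vm_caches():
    """Flush dirty pages to disk, then drop the page cache,
    dentries and inodes (the '3' setting). Must be run as root."""
    subprocess.run(["sync"], check=True)
    with open(DROP_CACHES, "w") as f:
        f.write("3\n")

def timed_read(path):
    """Seconds taken to read `path` sequentially, 1 MiB at a time."""
    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.monotonic() - start

# Typical use between benchmark runs (as root):
#   drop_vm_caches(); cold = timed_read(path)   # hits the disk
#   warm = timed_read(path)                     # hits the page cache
```

The cold/warm ratio per sub-test would show exactly how much each result owes to the VM's caching.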
I do like these benchmarks, they're certainly improving over time, but they could be better. More data please!
All of that data is easily available and clear through the Phoronix Test Suite.
Originally Posted by l8gravely
I wonder why Phoronix benchmarks always focus on tiobench latency, and never show the tiobench throughput... tiobench outputs a lot of data, and much of it is more interesting than the latency.
As the article says on the first page: "The x86_64 builds of both Fedora 11 and Ubuntu 9.04 were used."
Originally Posted by mendieta
Both use the 2.6.29 kernel.
Originally Posted by SyXbiT
Ubuntu uses a patched 2.6.28 kernel.
Most (casual) developers don't look too closely at the compiler that ships with the system; if it has gcc, they start using it. It is only in more formal environments that there is tool selection (or when an individual developer is really focused on some metric like the above).
Originally Posted by nathanvaneps
I have built *way* too many compilers myself (crosstool really rocks), but for most purposes I don't bother looking too closely at the compiler for general ad-hoc development tasks.
Finally, remember that a lot of people _like_ to stay on the bleeding edge with end-user functionality, like kernels, GNOME, Firefox, etc. They aren't focused on the speed or size of the result, but they know they will be building on a regular basis. They too don't focus on selecting the right compiler, but instead grab what is easily available.
Other than the kernel-related performance differences, Ubuntu kicks Fedora's butt. Time to switch.
What about Mandriva ?
I regret there is nothing about Mandriva this time.
I'd love to see "Ubuntu vs Fedora vs Mandriva performance" ;-)
Maybe better to give a GUI rating, not a performance rating ;) I prefer solutions executed by scripts: run it, and then something works. Of course lots of distros provide extra GUIs for this and that. I haven't tried Mandriva or Fedora lately, but historically Mandriva's GUI tools are maybe just behind SuSE's YaST. When you like the tools a distro provides on top of its preconfiguration/stability, that's usually the logical reason to choose it. Nobody would use Mac OS X because it is faster in a few benchmarks; the same applies to any distro. I don't know of anybody who selects a distro because some apps run slightly faster.
As PTS mainly compares self-compiled binaries, it would not be that hard to bootstrap a newer compiler if needed. I never did that because the default compiler was slower or generated slower binaries. The only time I definitely had to compile even an older gcc (2.95) instead of using the default (some 2.96 prerelease) of old Red Hat 7.x systems was because that compiler was so broken it could not build standard source code; I still have no idea how many changes Red Hat made to compile everything they shipped precompiled ;) Well, I did not care to fix the code, I just added another compiler, all in my home directory only, so nothing could hurt the rest of the system.
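The "extra compiler under $HOME" trick is just PATH ordering: install the locally built gcc with a --prefix under your home directory, then put its bin directory first. A minimal sketch of the mechanism (the stub script stands in for a real `make install`ed gcc, and the directory names are made up):

```python
import os
import stat

def install_stub_compiler(home_bin, name="gcc"):
    """Drop an executable into a per-user bin directory. In real life
    this would be a locally built gcc installed with
    ./configure --prefix=$HOME/gcc-2.95; a stub stands in for it here."""
    os.makedirs(home_bin, exist_ok=True)
    path = os.path.join(home_bin, name)
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho local-gcc\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path

def prepend_to_path(directory):
    """Put `directory` first in PATH so its tools shadow system ones."""
    os.environ["PATH"] = directory + os.pathsep + os.environ.get("PATH", "")

# Usage: install under ~/gcc-2.95/bin, prepend it, and `which gcc`
# now resolves to the private copy; the system compiler is untouched.
```

Since the change lives entirely in the user's environment, removing the directory (or the PATH entry) restores the system compiler with no cleanup needed elsewhere.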