Sorry, but you are only the 100,000th person who believes that the tests Phoronix ran with Ubuntu 7.04, 7.10, etc. were correct. There is enough proof that something went wrong during those tests (e.g. my P3 1000MHz gets nearly the same numbers under Ubuntu 8.10 as the tested Core2Duo 1.87GHz, and my Pentium-Mobile 1.7GHz gets nearly the same numbers under Ubuntu 8.10 as under Ubuntu 7.04 in the Phoronix test), so the numbers from the old tests can't be trusted.
Originally Posted by Takla
Now that Ubuntu 8.10 and Fedora 10 are marked stable, this would be a good time to rerun the tests with Ubuntu and Fedora (7.x, 8.x, etc.) on the same hardware as the "Fedora 10 vs. Ubuntu 8.10" test.
If you wait 20 days, you could include openSUSE 11.1 in that too.
Originally Posted by glasen
Very nice. I always wanted to see some 64-bit benchmarks too; thanks for the review.
OS-vs.-OS comparisons can use the defaults, sure, but comparing performance differences between library versions is even better, so that users know they can get better performance by installing newer (or older) libraries.
Originally Posted by deanjo
Compilation doesn't have to be required; that really has nothing to do with this, since that's what binary packages are for. The point is that you should be able to pinpoint slowdowns to differences in the libraries; if those are the same, then you know the performance loss lies elsewhere.

You're probably right, though: most users probably don't care that much. But maybe if it were easier to install newer libraries and compilation were required less often, more users would. In my opinion, that's where the focus should be: on the actual programs that cause the differences in performance. If you don't trace the problems to where they actually are, they'll never get solved.
Consistency in testing?
I was looking through the recent set of tests (vs. Mac, vs. Fedora, vs. OpenSolaris, etc.), and I'm surprised to see that there doesn't seem to be a consistent set of test results published. While you seem to use the same test suite, the results that are shown appear cherry-picked.
Perhaps these are just highlights showing the interesting comparisons, but it would be good to publish links to the entire set of results; otherwise there's the chance that an unfair comparison is being made, showing only the favorable results.