What is wrong with benchmarks
A lot is wrong with this kind of article, beyond just the way the testing is done. I posted about this in considerable depth in December.
One of my main gripes is that even though subject X might gain more from performance tuning than subject Z, the test environment might still favor subject Z, and performance/benchmark articles are all too often used as decision makers. The typical pattern looks like this:
- Candidates: XFCE and Gnome
- Test Environment: Old Computer
- (Add performance test results in here)
- Conclusion: XFCE is faster, thus better than Gnome.
The PROBLEM is that people who use performance numbers to make decisions don't always realize (or choose to ignore) the features and other benefits of the subjects that lost the benchmark.
I explained my gripe with this kind of article in much more detail in that post. In particular, I asked numerous questions about the test environment used in an article on the performance of various file systems, and pointed out specific flaws in its testing and reporting.