Hahah, I liked this article.
Something I've always wondered: if all these tweaks give extra performance and should be benchmarked instead of the defaults, then why are those tweaks not the defaults? Default values are (normally) set for a good reason, and while I do think that tweaks, multiple hardware configurations, etc., are required for a comprehensive analysis of a particular item, benchmarking alone requires some baseline, and that will be your default system.
When various tweaks prove to be suitable and stable, I'm sure they'll turn into default values.
You're forgetting the possibility that they don't know about other things. Clearly most people are content with the defaults.
Seriously: people already know about optimizations and benchmarking? Most of your users may be quite experienced with it.
But the more casual user won't know all these things.
The last time I saw someone picking a computer, he just looked at the keyboard.
He didn't even look at the memory or the CPU or GPU.
Can you believe that? When I told him that those things were important, he said, "What's the difference?".
Please don't lump everybody into a "they chose this, so they want it and know everything about it" category. That's a big mistake.
Which means that, with vsync enabled, any and every benchmark will always give a maximum of 60fps on a modern LCD display, using default settings. That is a sane default for most things -- it removes tearing when playing video, for example -- but it makes for an utterly pointless benchmark, and makes the results less reliable, since most frame rates are cut to 20, 30 or 60 fps.
Which is why most benchmarks disable vsync in order to test the drivers' and the cards' ability to render complex scenes at their limit. And this is already not the default configuration, and introduces tearing. I don't see a fundamental difference between disabling vsync (causes tearing, improves performance) and disabling SwapBuffersWait (causes tearing, improves performance).
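For reference, here's roughly what both toggles look like. The xorg.conf Device section below is a sketch for the open-source radeon driver (the option is spelled "SwapbuffersWait" in its man page; availability varies by driver and version), and the commented vblank_mode line is the driconf environment override that Mesa drivers honor for vsync:

```
# xorg.conf Device section (sketch, radeon driver assumed)
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    # Don't wait for vblank before swapping buffers:
    # improves raw benchmark numbers, introduces tearing.
    Option     "SwapbuffersWait" "off"
EndSection

# Per-run vsync override for Mesa drivers (driconf setting):
#   vblank_mode=0 ./some_benchmark
```

Either way the point stands: both changes trade tearing for throughput, so neither is more "default" than the other.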
I still think testing defaults is OK, though: if nothing else, it provides a baseline. But mentioning what can skew results (and why it is/isn't done) is definitely useful.
As for default config as a baseline, I'm all for it. I have nothing against testing the default. But I personally think that Ubuntu has made some bad decisions -- like using Compiz, which cannot suspend compositing on the fly AFAIK -- which harms performance on free drivers disproportionately.
KDE users do not have this issue.
It's unfair if poor defaults (in this case, unrelated to the driver) make the driver look bad.
A lot is wrong with this kind of article, if not merely with the way the testing is done. I posted in some considerable depth about this in December.
One of my main gripes is that even though subject X might gain more from performance tuning than subject Z, subject Z might be favored by the test environment, and performance/benchmark articles are all too often used as decision makers. For example:
- Candidates: XFCE and Gnome
- Test Environment: Old Computer
- (Add performance test results in here)
- Conclusion: XFCE is faster, thus better than Gnome.
The PROBLEM is that people who use performance numbers to make decisions don't always realize (or choose to ignore) the features and any other benefits/advantages of the other subjects in the test.
I explained my gripe with this kind of article in much more detail in the post. In particular I ask numerous questions about the test environment used in the article about the performance of various file systems, and point out specific flaws in the testing and reporting.
you have a tough job.
maybe you just need to be clear why you are doing the benchmarking. that would then influence the options. this site does lots of different sorts of benchmarking for different reasons.
if you are benchmarking to see which distro/OS is fastest, then defaults might be best.
if you want to compare 2 drivers for the same hardware, then you need to make sure the drivers are doing the same task.
if you want to compare several gcc releases then a range of options is good (e.g. does -flto make a difference).
comparing 2 different compiles, i'd say find the fastest option that still makes a functioning program.
for graphics drivers, more FPS than the screen refresh rate is meaningless and useless for a user. it is fast enough. maybe you need to find more demanding settings for the benchmark.