And that Linux kernel unpacking test is totally biased. Of course Linux is more optimized to unpack the Linux kernel than BSD is, duh. Doing it on BSD is comparing apples to oranges.
Then I have succeeded.
Trimming for brevity (If you feel I've misrepresented comments, please advise).
There are at least two fundamental classes of users: generalists, and people running custom, targeted workloads. If you understand your workload, micro-benchmarks can serve as a coarse guide for your system. For example, if your load values integrity, fsync performance paired with journaling behavior gives you an indication of which filesystems to use and which ones to avoid....
Simple microbenchmarks are ... they are only useful for improving code, not really for comparing multiple different filesystems.
We are not talking about a couple of percent in these cases; we are talking multiples, or orders of magnitude, in a lot of cases. Improving the absolute repeatability shouldn't lead to large changes.
... If one performs a benchmark in a subfolder of a filesystem and then deletes the files afterwards, it is highly probable that the end state will be far from the beginning condition, so one cannot actually perform the benchmark again. It is also hard for another person to reproduce on another box.
Hi, I assume you are Matt Dillon - the leader of the DragonFly BSD team. You're the domain expert on that platform.
In general, Michael is fairly religious about doing a default install - respecting the decisions that are codified into the system, i.e. the decisions made by the developers on behalf of non-expert users.
Now, I believe that Michael would be willing to reconfigure a DragonFly BSD system to your specifications and re-run the same benchmarks again. Hey, even use your choice of operating system. Are you happy to do that? The only entry criterion is that the tuning guide is hosted and publicly accessible for others.
Further, if there are any extra tests or benchmarks that you would like to see, I doubt there would be any problem running them - or adding them to PTS.
Feel free to PM me, email me & michael (matthew at phoronix.com & michael at phoronix.com) or follow up on this thread.
Also, please provide tuning information with the benchmarks so that people can make suggestions for future improvements, which many of your readers would be more than happy to do.
Totally agree, but it is quite interesting to see that that particular test differed 22% between best and worst on BSD using the same compiler and CPU. A fifth factor is the compiler, which is obvious in the gzip tests (which are CPU bound, NOT filesystem bound in any way).
Now, if this is due to some filesystem saturating the CPU, that is still something that affects filesystem performance, at least on that particular hardware.
ZFS cheats. It caches a lot, even when told not to, and flushes later - but returns immediately. If you run out of cache, ZFS will thrash your disk for ages, but until that point, ZFS benchmarking will report amazing numbers.
I am surprised how quiet the DragonFly BSD people are. Always claiming how fast their HAMMER is and how well it scales. Well, if the kernel can't even do SMP, I have my doubts about scalability.
I think giving these benchmarks another go against our recent DragonFly BSD 2.10 release is probably warranted.