Can DragonFly's HAMMER Compete With Btrfs, ZFS?

  • #21
    And that Linux kernel unpacking test is totally biased. Of course Linux is more optimized to unpack the Linux kernel than BSD is, duh. Doing it on BSD is comparing oranges to apples.

    Comment


    • #22
      Originally posted by misiu_mp View Post
      And that Linux kernel unpacking test is totally biased. Of course Linux is more optimized to unpack the Linux kernel than BSD is, duh. Doing it on BSD is comparing oranges to apples.
      I can't tell if you're joking or not...

      Comment


      • #23
        Then I have succeeded.

        Comment


        • #24
          Originally posted by misiu_mp View Post
          Then I have succeeded.
          I read your comment history and figured it out, but it was too late to edit my post. Good job, by the way.

          Comment


          • #25
            Trimming for brevity (If you feel I've misrepresented comments, please advise).

            Originally posted by baryluk View Post
            Only reproductible way to perform good benchmark is to trace all filesystem events
            In general that is not needed, though it would yield consistent and absolutely reliable results. It also becomes extremely useful if there are filesystem race conditions that you need to track down.


            ...

            Simple microbenchmarks are ... only useful for improving code, not really for comparing multiple different filesystems.
            There are at least two fundamental classes of users: generalists, and people carrying custom, targeted loads. If you understand your workload, micro-benchmarks can serve as a coarse guide for your system. For example, if your load values integrity, fsync performance paired with journaling behavior gives you an indication of which filesystems to use and which ones to avoid.
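
            As an illustration of the kind of coarse fsync micro-benchmark being described (a minimal sketch, not from the thread; the file path, iteration count, and write size are arbitrary assumptions):

            ```python
            import os
            import tempfile
            import time

            def fsync_latency(path, iterations=100, size=4096):
                """Mean seconds per durable 4 KiB append: write + fsync each time."""
                buf = b"x" * size
                start = time.perf_counter()
                with open(path, "wb") as f:
                    for _ in range(iterations):
                        f.write(buf)
                        f.flush()
                        os.fsync(f.fileno())  # force the data to the media
                return (time.perf_counter() - start) / iterations

            # Benchmark against a throwaway file on the filesystem under test.
            fd, target = tempfile.mkstemp()
            os.close(fd)
            try:
                latency = fsync_latency(target)
                print(f"mean fsync'd write: {latency * 1e3:.2f} ms")
            finally:
                os.remove(target)
            ```

            Run against the same directory on each filesystem, a number like this is only a coarse signal, but it is exactly the signal an integrity-sensitive workload cares about.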


            ... If one performs a benchmark in a subfolder of a filesystem and then deletes the files afterwards, it is highly probable that the end state will be far off from the beginning condition, so one cannot actually perform the benchmark again. It is also hard for another person to reproduce on another box.
            ...
            We are not talking about a couple of percent in these cases; we are talking about multiples, or orders of magnitude, in many cases. Improving absolute repeatability shouldn't produce changes that large.

            Comment


            • #26
              Hi, I assume you are Matt Dillon, the leader of the DragonFly BSD team and the domain expert on that platform.

              In general, Michael is fairly religious about doing a default install, respecting the decisions that are codified into the system, i.e. the decisions made by the developers on behalf of non-expert users.

              Now, I believe that Michael would be willing to reconfigure a DragonFly BSD system to your specifications and re-run the same benchmarks, even using your choice of operating system. Are you happy to do that? The only entry criterion is that the tuning guide be hosted and publicly accessible to others.

              Further, if there are any extra tests or benchmarks that you would like to see, I doubt there would be any problem running them, or adding them to PTS.

              Feel free to PM me, email me & michael (matthew at phoronix.com & michael at phoronix.com) or follow up on this thread.

              Originally posted by dillon View Post
              There are numerous other issues... whether the system was set to AHCI mode or not (DragonFly's AHCI driver is far better than its ATA driver). Whether the OS was tuned for benchmarking or for real-world activities w/ regards to how much memory the OS is willing to dedicate to filesystem caches. How often the OS feels it should sync the filesystem. Filesystem characteristics such as de-dup and compression and history. fsync handling. Safety considerations (how much backlog the filesystem or OS caches before it starts trying to flush to the media... more is not necessarily better in a production environment), characteristics in real load situations which require system memory for things other than caching filesystem data. And I could go on.

              In short, these benchmarks are fairly worthless.

              Now HAMMER does have issues, but DragonFly also has solutions for those issues. In a real system where performance matters you are going to have secondary storage, such as a small SSD, and in DragonFly setting up an SSD with swapcache to cache filesystem meta-data alongside the slower 'normal' 1-3TB HD(s) is kinda what HAMMER is tuned for. Filesystem performance testing on a laptop is a bit of an oxymoron, since 99.999% of what you will be doing normally will be cached in memory anyway and the filesystem will be irrelevant.

              But on the whole our users like HAMMER because it operates optimally for most workloads, and being able to access live snapshots of everything going back in time however long you want to go (based on storage use versus how much storage you have) is actually rather important. Near-real-time mirroring streams to onsite and/or offsite backups, not to mention being able to run multiple mirroring streams in parallel with very low overhead, is also highly desirable. It takes a little tuning (e.g. there is no reason to keep long histories for /usr/obj or /tmp), but it's easy.

              -Matt

              Comment


              • #27
                Umm, why?

                Originally posted by mtippett View Post
                In general, Michael is fairly religious about doing a default install, respecting the decisions that are codified into the system, i.e. the decisions made by the developers on behalf of non-expert users.
                We can infer from this that Michael is fairly religious in creating worthless benchmarks that are clearly biased toward systems that are tuned to be fast-and-dangerous by default.

                Also, please provide tuning information with the benchmarks so that people can make suggestions for future improvements, which many of your readers would be more than happy to do.

                Comment


                • #28
                  A fifth is the compiler, which is obvious in the gzip tests (which are cpu bound, NOT filesystem bound in any way).
                  Totally agree, but it is quite interesting to see that that particular test differed by 22% between best and worst on BSD using the same compiler and CPU.

                  Now, if this is due to some filesystem saturating the CPU, that is still something that affects filesystem performance, at least on that particular hardware.

                  Comment


                  • #29
                    Originally posted by thesjg View Post
                    @ciplogic -- what you seem to be failing to understand is that the ZFS random write results aren't actually possible, there is something else going on. Without an explanation as to what else is going on or WHY they are possible, all of the results published in this article are rubbish. 100% meaningless. So sure, examining why errant results occur might not be his job, but if that's the case, and he lets the article exist as-is, he will be disseminating gross misinformation.

                    The credibility of phoronix is pretty poor already, I suspect they will simply let this be another nail in the coffin.
                    No, it's EASY to see what is going on. It's the same thing a lot of people have observed in the past:
                    ZFS cheats. It caches a lot, even when told not to, and flushes later, but returns immediately. If you run out of cache, ZFS will thrash your disk for ages, but until that point ZFS benchmarks will report amazing numbers.
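
                    The caching effect being alleged here is easy to demonstrate in general terms (a sketch, not ZFS-specific; the sizes and paths are arbitrary assumptions): a naive sequential-write benchmark that stops the clock before fsync largely measures the page cache, not the disk.

                    ```python
                    import os
                    import tempfile
                    import time

                    def throughput(path, mb=64, durable=False):
                        """Apparent MiB/s for sequential 1 MiB writes.

                        durable=False: let the OS/filesystem cache absorb everything,
                        which is what a naive benchmark measures. durable=True: fsync
                        before stopping the clock, so the media is actually involved.
                        """
                        buf = b"\0" * (1 << 20)
                        start = time.perf_counter()
                        with open(path, "wb") as f:
                            for _ in range(mb):
                                f.write(buf)
                            if durable:
                                os.fsync(f.fileno())
                        return mb / (time.perf_counter() - start)

                    fd, path = tempfile.mkstemp()
                    os.close(fd)
                    try:
                        fast = throughput(path, durable=False)
                        honest = throughput(path, durable=True)
                        print(f"cached: {fast:.0f} MiB/s, durable: {honest:.0f} MiB/s")
                    finally:
                        os.remove(path)
                    ```

                    The "cached" number can be far higher than the hardware can sustain, which is why benchmarks that don't control for sync behavior end up comparing cache policies rather than filesystems.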

                    I am surprised how quiet the DragonFly BSD people are. They are always claiming how fast their HAMMER is and how well it scales. Well, if the kernel can't even do SMP, I have my doubts about scalability.

                    Comment


                    • #30
                      I think giving these benchmarks another go against our recent DragonFly BSD 2.10 release is probably warranted.

                      Comment
