Can DragonFly's HAMMER Compete With Btrfs, ZFS?

  • Can DragonFly's HAMMER Compete With Btrfs, ZFS?

    Phoronix: Can DragonFly's HAMMER Compete With Btrfs, ZFS?

    The most common Linux file-systems we talk about at Phoronix are of course Btrfs and EXT4 while the ZFS file-system, which is available on Linux as a FUSE (user-space) module or via a recent kernel module port, gets mentioned a fair amount too. When it comes to the FreeBSD and PC-BSD operating systems, ZFS is looked upon as the superior, next-generation option that is available to BSD users. However, with the DragonFlyBSD operating system there is another option: HAMMER. In this article we are seeing how the performance of this original creation within the DragonFlyBSD project competes with ZFS, UFS, EXT3, EXT4, and Btrfs.

    http://www.phoronix.com/vr.php?view=15605

  • #2
    Some reactions on FBSD performance mailing list

    http://lists.freebsd.org/pipermail/f...ry/004137.html

    • #3
      There is something seriously wrong with the Threaded I/O tester results. There is simply no way possible that the ZFS writes got faster when switching from linear to random writes, especially considering that all but HAMMER and btrfs went down by an order of magnitude. Are you sure the result wasn't supposed to be 4.96?

      I feel that this makes all of your results quite suspect.

      • #4
        Originally posted by thesjg View Post
        I feel that this makes all of your results quite suspect.
        Also, the blogbench results indicate that you ran the benchmark once and separated the read and write results to create the independent graphs. You should point this out in your article. For heavy concurrent read/write workloads where read performance is important, DragonFly would recommend using the fairq disk scheduler.
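
        For context, blogbench drives reader and writer threads against the same files at the same time, so both numbers come out of a single contended run. A rough Python sketch of that shape (the directory name, thread counts, and sizes are invented here, not blogbench's actual parameters):

        ```python
        # Rough sketch of a blogbench-style mixed workload: reader and writer
        # threads work on the same files concurrently, and each side's score
        # is tallied separately even though the two contend for the same disk.
        import os, threading, time

        DIR = "blogdir"        # hypothetical scratch directory
        BLOCK = 64 * 1024
        DURATION = 5.0         # seconds per run
        counts = {"read": 0, "write": 0}
        lock = threading.Lock()
        os.makedirs(DIR, exist_ok=True)

        def writer(i):
            buf = os.urandom(BLOCK)
            end = time.monotonic() + DURATION
            while time.monotonic() < end:
                with open(os.path.join(DIR, "blob%d" % i), "ab") as f:
                    f.write(buf)
                with lock:
                    counts["write"] += 1

        def reader(i):
            end = time.monotonic() + DURATION
            while time.monotonic() < end:
                try:
                    with open(os.path.join(DIR, "blob%d" % i), "rb") as f:
                        while f.read(BLOCK):
                            pass
                except FileNotFoundError:
                    continue   # the matching writer may not have created it yet
                with lock:
                    counts["read"] += 1

        threads = [threading.Thread(target=writer, args=(i,)) for i in range(4)]
        threads += [threading.Thread(target=reader, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counts)   # one run, two separately reported results
        ```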

        • #5
          Originally posted by thesjg View Post
          Are you sure the result wasn't supposed to be 4.96?

          I feel that this makes all of your results quite suspect.
          Everything is automated and reproducible from test installation to graph generation.
          Michael Larabel
          http://www.michaellarabel.com/

          • #6
            Originally posted by Michael View Post
            Everything is automated and reproducible from test installation to graph generation.
            That's cute, but the results don't jibe. Unless you can explain why ZFS was miraculously faster when all of the other file systems were slower, I have to assume there is something wrong with your test.

            • #7
              Originally posted by thesjg View Post
              There is something seriously wrong with the Threaded I/O tester results. There is simply no way possible that the ZFS writes got faster when switching from linear to random writes, especially considering that all but HAMMER and btrfs went down by an order of magnitude. Are you sure the result wasn't supposed to be 4.96?

              I feel that this makes all of your results quite suspect.

              It is of course possible. You just need to know how ZFS works, and what kinds of workloads it was designed for.

              • #8
                It is quite easy to explain (log-structured writes, more scalable techniques, and probably some bugs in btrfs), but I just do not have the time.
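
                To make the "log structured" point concrete, here is a toy Python sketch (my own model, not DragonFly's or Sun's code): a copy-on-write log allocator appends every write at the log head and only updates a block map, so random logical writes reach the disk as a purely sequential stream.

                ```python
                # Toy model of log-structured allocation: writes to random logical
                # blocks are appended at the head of the log, so the disk sees one
                # sequential stream regardless of the logical write order.
                import random

                BLOCK = 4096
                log_head = 0    # next free byte at the head of the on-disk log
                block_map = {}  # logical block number -> current physical offset

                def write_block(lbn):
                    """Append one block at the log head and remap the logical block."""
                    global log_head
                    phys = log_head
                    block_map[lbn] = phys   # the old copy simply becomes garbage (COW)
                    log_head += BLOCK
                    return phys

                lbns = list(range(8))
                random.shuffle(lbns)                    # a "random write" workload
                print([write_block(n) for n in lbns])   # 0, 4096, 8192, ... regardless
                ```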

                PS. Stupid 1 minute limit.

                • #9
                  Originally posted by baryluk View Post
                  It is quite easy to explain (log-structured writes, more scalable techniques, and probably some bugs in btrfs), but I just do not have the time.

                  PS. Stupid 1 minute limit.
                  Fortunately, I am familiar with the internals of UFS, ZFS and HAMMER. HAMMER is "log structured" in the same fashion as ZFS. Being log structured does nothing to explain the -increase- in performance seen between the final two graphs. These benchmarks are absolutely useless unless the inconsistencies can be explained.

                  • #10
                    Hmm, @thesjg, you are right. The last graph alone (random write) could be valid, but comparing it to the previous one (continuous write) does raise some questions. Maybe the data was not written to disk but combined in the cache before the sync. One should use files a LOT bigger than the available RAM (though unfortunately random writes will then take a long time to complete) and a good random number generator. Needs more investigation.
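
                    A test that takes the cache out of the picture could look roughly like this (a hypothetical Python sketch; the file name, sizes, and write count are invented, and the file really must be bigger than RAM):

                    ```python
                    # Hypothetical random-write test that tries to defeat the page
                    # cache: the file is meant to be much larger than RAM, offsets
                    # come from a seeded PRNG, and fsync() forces the data out
                    # before the timer stops.
                    import os, random, time

                    PATH = "testfile.bin"       # assumed scratch file
                    FILE_SIZE = 16 * 1024**3    # must exceed RAM for a fair test
                    BLOCK = 4096
                    WRITES = 100_000

                    rng = random.Random(42)     # reproducible offsets
                    buf = os.urandom(BLOCK)     # incompressible payload

                    fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
                    os.ftruncate(fd, FILE_SIZE)

                    start = time.monotonic()
                    for _ in range(WRITES):
                        off = rng.randrange(FILE_SIZE // BLOCK) * BLOCK
                        os.pwrite(fd, buf, off)
                    os.fsync(fd)                # without this you time the cache
                    elapsed = time.monotonic() - start
                    os.close(fd)

                    print("%.2f MB/s" % (WRITES * BLOCK / elapsed / 1024**2))
                    ```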

                    • #11
                      This machine has 4 GB of RAM, and ZFS has very aggressive caching, probably more aggressive than the other file systems. I do not think the random write test was done with files that big, as it would take ages to complete in a truly random write pattern. A few-hour PostgreSQL benchmark on a 40 GB database, for example, would probably be a better indicator.

                      But the result of 49 MB/s is still slightly below what I see as this disk's hardware limit (about 55 MB/s). A result of 100 MB/s would be more obviously wrong (probably indicating that some blocks were written multiple times without ever reaching the disk).

                      The other possibility is that compression was enabled, but that would have boosted the continuous write results too, so it can be excluded from the possible causes.
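
                      As a back-of-the-envelope check (my own numbers, under the simplifying assumption that cached writes cost no time): the apparent throughput climbs well past the platter speed as soon as part of the data never reaches the disk before the timer stops.

                      ```python
                      # Back-of-envelope: the MB/s a timer reports if part of the
                      # written data is still in the page cache when it stops.
                      DISK_MBS = 55.0   # rough sequential limit of the test disk

                      def apparent_mbs(total_mb, cached_mb):
                          # Assume cached writes cost ~zero time; only the rest hits disk.
                          return total_mb / ((total_mb - cached_mb) / DISK_MBS)

                      for cached in (0, 100, 500):
                          print(cached, "MB cached ->",
                                round(apparent_mbs(1000, cached), 1), "MB/s")
                      # 0 -> 55.0, 100 -> 61.1, 500 -> 110.0 MB/s: anything clearly
                      # above ~55 MB/s means the disk never absorbed all the data.
                      ```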

                      • #12
                        @baryluk I think Michael did it right. Anomalies will always happen in computing, depending on caching and so on. I do not think it is Michael's duty to dig into the source code (or wherever else the answer lies) to find which performance regressions exist and where they come from.
                        For example, Ext4 is known to be slower with its *defaults* (by a visible few percent), but in the long run, since it has online defragmentation and other features, once fragmentation takes its toll you may find that a one-year-old Ext4 machine is faster than an Ext3 one.
                        So benchmarks in general are limited, and I think Michael does a wonderful job promoting Linux and the BSDs. A good caching implementation that shows up in one file system and is not reproduced in the other implementations may simply be a signal to report a bug to the other FS implementers, not a reason to shoot the messenger.

                        • #13
                          @ciplogic -- what you seem to be failing to understand is that the ZFS random write results aren't actually possible; there is something else going on. Without an explanation as to what else is going on, or WHY they are possible, all of the results published in this article are rubbish. 100% meaningless. So sure, examining why errant results occur might not be his job, but if that's the case, and he lets the article stand as-is, he will be disseminating gross misinformation.

                          The credibility of Phoronix is pretty poor already; I suspect they will simply let this be another nail in the coffin.

                          • #14
                            Originally posted by thesjg View Post
                            @ciplogic -- what you seem to be failing to understand is that the ZFS random write results aren't actually possible; there is something else going on. Without an explanation as to what else is going on, or WHY they are possible, all of the results published in this article are rubbish. 100% meaningless. So sure, examining why errant results occur might not be his job, but if that's the case, and he lets the article stand as-is, he will be disseminating gross misinformation.

                            The credibility of Phoronix is pretty poor already; I suspect they will simply let this be another nail in the coffin.
                            Benchmarking with anything other than the applications you actually run is meaningless.
                            As far as I can tell, it is just better caching behavior. Since this benchmark likely does not make the OS flush its cache, things can end up looking "too fast". The question is: if your application uses the same access pattern, will it fly just as fast?
                            My point was that anomalies always appear in benchmarking. Disk throughput is two orders of magnitude slower than memory, and random disk access is slower still, so I do not think this is a fault of the Phoronix suite. All Michael can do is run the tests and check that the results are statistically sound (which is a feature of PTS).

                            • #15
                              Accuracy issues aside, I had never heard of the HAMMER filesystem. If this is a new effort with only a few developers, then congratulations are in order -- this filesystem is a significant improvement over what you already have on BSD, and in some cases it remains competitive with the big three Linux filesystems. It's always nice to have another option in the open source world. If I find myself using BSD for some reason, I might check this out.
