Benchmarks Of The New ZFS On Linux: EXT4 Wins

  • #31
    Agreed. I personally run both ext4 and btrfs these days, and don't really consider using them for the same tasks. For what I do:

    Ext4 for single-disk machines that are not running bleeding-edge software. It wins on speed, and has proven to be reliable.
    Ext4 over MDADM for large RAID6 arrays. It's not filesystem-aware RAID, but you can't beat MDADM for flexibility and reliability.
    BTRFS for small arrays (4 or fewer disks) where BTRFS's current RAID1 implementation is sufficient. The ability to add and remove disks on the fly, use differently sized disks, and expand your volumes by adding disks is something that not even ZFS can match. (Yes yes, use with caution.)
    BTRFS for my home machine (root partition on an SSD) and my workstation at work, which I have a habit of running too-bleeding-edge software on. The ability to snapshot your / and roll back on the fly is amazingly convenient, especially with tools like apt-btrfs-snapshot (rough manual sketch below).
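
    The manual version of that snapshot/rollback dance is roughly the following (just a sketch; the /snapshots path and subvolume names are made up, and apt-btrfs-snapshot automates the snapshot-before-apt step):

    btrfs subvolume snapshot / /snapshots/root-pre-upgrade   # writable snapshot of the root subvolume
    # ...upgrade goes sideways...
    btrfs subvolume list /                                    # note the snapshot's subvolume ID
    btrfs subvolume set-default <ID> /                        # boot from the snapshot on the next reboot
    # (or point rootflags=subvol=snapshots/root-pre-upgrade at it on the kernel command line)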

    Anyway, just my personal usage. I don't get into ZFS much on Linux as it still has the feel of a "foreign entity" (if that makes sense), even with ZOL. Currently RAID5/6 is the only feature I still envy from ZFS, and that's finally starting to be released.



    • #32
      Michael I would like to point you
      Here: http://scientificlinuxforum.org/inde...showtopic=1634
      and
      Here: http://listserv.fnal.gov/scripts/wa....ge.com&P=21882

      Proper hardware, configuration and use case of ZOL.

      - Crash0veride



      • #33
        Michael I would like to point you
        Here
        and
        Here

        Proper use case and HW/OS configuration for ZOL. Please consider this in future benchmarks.
        I am not sure I know many people who would use ZOL as the file system of choice on their laptop, nor is that the intended use case of ZOL.

        - Crash0veride



        • #34
          Originally posted by AndrewDB View Post
          I was going to write an email to Michael Larabel, since his article title and even the article as a whole are misleading.
          As others have noted earlier in this thread, the first big mistake here is a clear case of comparing apples and oranges. ZFS and EXT4 are completely different filesystems; yes, both allow reading and writing files, but the similarity stops there!

          ZFS is a great piece of software and probably the best filesystem for 24/7 servers, for many reasons, none of which are discussed in the article.
          EXT4 is an evolution of EXT3 which itself is an evolution of EXT2, which itself is an evolution of extfs, which itself is an evolution of the Minix filesystem written by Andrew Tanenbaum something like 30 years ago.

          But really the big question here is: what is the author trying to demonstrate? What exactly does "winning" mean here? IMNSHO, nothing.
          If one really is looking for the "fastest" filesystem for Linux, then look no further than tmpfs. Which just goes to show that "fastest" is also completely meaningless unless it is qualified.

          Michael, you can do much better than that!

          Very strange title indeed. Makes me wonder if I was actually right when, back when I asked for inclusion of ZFS in the filesystem tests, I jokingly asked whether Michael had arguments with the ZoL developers or hates the license or something. It seems like an attempt to scare people away from it...
          Well, I'm using it on multiple servers just for files (no root filesystems or SQL servers), I get maximum disk speeds during transfers on all of them, and I love the features. So if you are looking for a simple (big) file server, don't be scared off by this article!



          • #35
            Since that was on a single disk, and not a multi-disk raidz vs. ext4-on-RAID5 setup:

            complete and utter shite.

            As expected from moronix.

            Just like back when moronix compared filesystems in their default configurations: ext3 had barriers turned off while everyone else had them turned on, and ext3 was the winner!
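
            (For context, "barriers" here is the filesystem barrier mount option; forcing it on for ext3 would look something like: mount -o remount,barrier=1 /.)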

            No, moron, ext3 was broken.



            • #36
              Reliability

              ZFS is well understood conceptually, but concepts don't write themselves to disk. The Linux implementation of ZFS is not nearly as well-tested as the Solaris / Illumos implementations of ZFS. I think it is a tremendous mistake to argue about ZFS's reliability in this particular implementation. I can't imagine admins responsible for valuable data being more comfortable storing it using ZOL than ext4.

              For the record, I have nothing against ZFS politically. I like having choices in the open source world, and I think the Linux ecosystem would be much richer if ZFS were as available here as it has been on Solaris / Illumos.



              • #37
                Comparing EXT4 backends would be more informative

                I would prefer to see benchmarks of EXT4 along these lines:

                EXT4 on a raw disk
                EXT4 on mdadm
                EXT4 on LVM on mdadm
                EXT4 on a ZFS block device (zvol)

                The main attraction of zfs to me is its fast snapshotting and zfs send for backups. I would love to have the speed of ext4 on a fast-snapshotting block device without the LVM2 snapshot limitations (a cap on the number of snapshots and the copy-on-write performance hit). Something along the lines of the sketch below.
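
                Roughly what that last combination would look like (just a sketch; the pool/dataset name tank/ext4vol and the mount point are made up, and you'd probably want to fsfreeze or unmount before snapshotting to get a clean image):

                zfs create -V 100G tank/ext4vol           # carve a 100 GB zvol out of the pool
                mkfs.ext4 /dev/zvol/tank/ext4vol          # put ext4 on top of it
                mount /dev/zvol/tank/ext4vol /mnt/data    # use it like any other block device
                zfs snapshot tank/ext4vol@monday          # near-instant snapshot, no preallocated snapshot space
                zfs rollback tank/ext4vol@monday          # roll the whole volume back (with it unmounted)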



                • #38
                  Originally posted by Serge View Post
                  ZFS is well understood conceptually, but concepts don't write themselves to disk. The Linux implementation of ZFS is not nearly as well-tested as the Solaris / Illumos implementations of ZFS. I think it is a tremendous mistake to argue about ZFS's reliability in this particular implementation. I can't imagine admins responsible for valuable data being more comfortable storing it using ZOL than ext4.

                  ...
                  You are wrong. ZOL 0.6.1 is actually quite reliable and production-ready. It's just not optimized for performance yet, since utmost performance (i.e. "getting that last 3% extra speed") has low priority for ZFS storage admins. Not to mention the fact that EXT4 has had numerous bugs reported over the last couple of years, some of them causing data corruption.

                  I am a sysadmin with 20 years of experience and I feel much more comfortable storing valuable data using ZFS on Linux than using ext4.



                  • #39
                    My Dbench Results

                    I'll preface this by saying I'm a happy production ZFS-on-Linux user and a (very small) contributor to the project. ZOL isn't perfect, but many of us use it day to day in production environments with great success. A simple 1-disk/1-user benchmark doesn't do ZOL justice; in fact, I think something must have been way off in this article's dbench results.

                    I wanted to compare the enterprise performance of zfs vs. ext4 on top of mdadm. Due to time constraints, for simplicity, and for comparison with this article's final graph, I only used dbench. The dbench command was always "dbench -t 60 x", where x was 1, 20, and 50 simulated clients. All mount options and zfs pool parameters were left at default values. No L2ARC was used. All mdadm / zpool create commands were very basic: no fancy RAID offset settings, block-size tunings, etc.

                    The goal was to compare zfs to ext4/md under RAID0, RAID1, and RAID5/RAIDZ setups with 4 drives. It would have been nice to use all internal drives, but the USB 3.0 interfaces work fairly well and shouldn't get maxed out with 4 drives.

                    Hardware setup used for all tests:

                    Machine with 24 GB RAM and an i7-3770 processor, running Debian Wheezy.
                    Four 1 TB drives, all attached via USB 3.0: (2) WD Passport 1 TB drives plugged into a USB 3.0 hub, which is then plugged into the motherboard; (1) WD Caviar Black 1 TB and (1) WD Blue 1 TB, both SATA drives on USB 3.0 converters, one plugged into the same hub as the two Passport drives and the other plugged into the motherboard.

                    The zfs module options were: options zfs zfs_arc_max=4147483648 l2arc_headroom=8 zvol_major=240 (i.e. the ARC capped at roughly 4 GB).

                    All mdadm arrays were created clean, so there were no background syncs or other disk I/O or CPU load on the system. All zfs pools used default values: no compression, block-size changes, dedup, L2ARC, etc.

                    I will only show the final dbench output line. In addition, for each of the 3 configurations I alternated between the zfs and the ext4/md runs. The pool/array creation commands were along the lines of the sketch below.
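
                    (A rough sketch only; the /dev/sd? names and md device number are placeholders for the /dev/disk/by-id devices shown in the outputs below, and the raid0/raidz1 variants just swap the vdev keyword / --level value.)

                    # 4-way mirror pool and the equivalent md array + ext4
                    zpool create zfstest mirror sdg sdh sdi sdj
                    mdadm --create /dev/md127 --level=1 --raid-devices=4 /dev/sdg /dev/sdh /dev/sdi /dev/sdj
                    mkfs.ext4 /dev/md127

                    # benchmark loop, 60 seconds per run
                    for clients in 1 20 50; do dbench -t 60 $clients; done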

                    RAID 1 (4-way mirror) disk setup:

                    zfstest ONLINE 0 0 0
                    mirror-0 ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575844314138315939333631-0:0 ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575833314535315233393632-0:0 ONLINE 0 0 0
                    ata-WDC_WD1001FALS-00J7B0_WD-WMATV8259309 ONLINE 0 0 0
                    ata-WDC_WD10EALS-00Z8A0_WD-WCATR0556065 ONLINE 0 0 0

                    md127 : active raid1 sdh[3] sdg[2] sdj[1] sdi[0]
                    976597824 blocks super 1.2 [4/4] [UUUU]

                    Raid 1 Results:
                    1. 1 Client
                      zfs: Throughput 50.7339 MB/sec 1 clients 1 procs max_latency=589.779 ms
                      ext4/md: Throughput 105.034 MB/sec 1 clients 1 procs max_latency=316.502 ms
                    2. 20 Clients
                      zfs: Throughput 97.2308 MB/sec 20 clients 20 procs max_latency=12746.879 ms
                      ext4/md: Throughput 45.6418 MB/sec 20 clients 20 procs max_latency=921.139 ms
                    3. 50 Clients
                      zfs: Throughput 32.5826 MB/sec 50 clients 50 procs max_latency=20144.122 ms
                      ext4/md: Throughput 13.2817 MB/sec 50 clients 50 procs max_latency=3276.940 ms


                    RAID 0 (one big disk) setup:

                    zfstest ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575844314138315939333631-0:0 ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575833314535315233393632-0:0 ONLINE 0 0 0
                    ata-WDC_WD1001FALS-00J7B0_WD-WMATV8259309 ONLINE 0 0 0
                    ata-WDC_WD10EALS-00Z8A0_WD-WCATR0556065 ONLINE 0 0 0

                    md127 : active raid0 sdh[3] sdg[2] sdj[1] sdi[0]
                    3906981888 blocks super 1.2 512k chunks

                    Raid 0 Results:
                    1. 1 Client
                      zfs: Throughput 243.524 MB/sec 1 clients 1 procs max_latency=139.672 ms
                      ext4/md: Throughput 139.253 MB/sec 1 clients 1 procs max_latency=84.361 ms
                    2. 20 Clients
                      zfs: Throughput 730.531 MB/sec 20 clients 20 procs max_latency=758.726 ms
                      ext4/md: Throughput 238.648 MB/sec 20 clients 20 procs max_latency=999.028 ms
                    3. 50 Clients
                      zfs: Throughput 636.012 MB/sec 50 clients 50 procs max_latency=1061.083 ms
                      ext4/md: Throughput 269.614 MB/sec 50 clients 50 procs max_latency=870.887 ms


                    RAID 5 / RAIDZ disk setup:

                    NAME STATE READ WRITE CKSUM
                    zfstest ONLINE 0 0 0
                    raidz1-0 ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575844314138315939333631-0:0 ONLINE 0 0 0
                    usb-WD_My_Passport_0740_575833314535315233393632-0:0 ONLINE 0 0 0
                    ata-WDC_WD1001FALS-00J7B0_WD-WMATV8259309 ONLINE 0 0 0
                    ata-WDC_WD10EALS-00Z8A0_WD-WCATR0556065 ONLINE 0 0 0

                    md127 : active raid5 sdh[3] sdg[2] sdj[1] sdi[0]
                    2929792512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

                    Raid 5 / RAIDZ Results:
                    1. 1 Client
                      zfs: Throughput 91.8982 MB/sec 1 clients 1 procs max_latency=99.314 ms
                      ext4/md: Throughput 36.9698 MB/sec 1 clients 1 procs max_latency=532.121 ms
                    2. 20 Clients
                      zfs: Throughput 353.076 MB/sec 20 clients 20 procs max_latency=478.816 ms
                      ext4/md: Throughput 70.2872 MB/sec 20 clients 20 procs max_latency=3915.164 ms
                    3. 50 Clients
                      zfs: Throughput 305.933 MB/sec 50 clients 50 procs max_latency=1192.350 ms
                      ext4/md: Throughput 75.9529 MB/sec 50 clients 50 procs max_latency=4370.479 ms



                    • #40
                      Originally posted by crash0veride View Post
                      Michael I would like to point you
                      Here: http://scientificlinuxforum.org/inde...showtopic=1634
                      and
                      Here: http://listserv.fnal.gov/scripts/wa....ge.com&P=21882

                      Proper hardware, configuration and use case of ZOL.

                      - Crash0veride
                      Nice!



                      @mgmartin:

                      Thanks for posting those!


                      This clearly shows how ZFS shines in its intended environment.



                      Originally posted by ryao View Post
                      You should use zfs send/recv for backups. It will outperform rsync on ext4, especially when doing incremental backup.
                      A neat feature I also wasn't aware of; the more ZFS volumes or pools I use, the more useful this feature becomes for me.

                      Thanks!
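
                      For anyone else who hadn't seen it, the incremental variant goes roughly like this (just a sketch; the pool names and backup host are made up):

                      zfs snapshot tank/data@monday
                      zfs send tank/data@monday | ssh backuphost zfs recv backup/data                        # initial full copy
                      zfs snapshot tank/data@tuesday
                      zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs recv backup/data   # only blocks changed since monday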

