ZFS vs EXT4 - ZFS wins


  • #11
    Software component versions

    Some relevant package/version bits:

    e2fsprogs-libs-1.41.12-14.el6.x86_64
    e2fsprogs-1.41.12-14.el6.x86_64
    lvm2-libs-2.02.98-9.el6.x86_64
    lvm2-2.02.98-9.el6.x86_64
    mdadm-3.2.5-4.el6.x86_64

    zfs-test-0.6.1-1.el6.x86_64
    dkms-2.2.0.3-2.zfs1.el6.noarch
    zfs-dkms-0.6.1-2.el6.noarch
    zfs-dracut-0.6.1-1.el6.x86_64
    zfs-release-1-2.el6.noarch
    zfs-0.6.1-1.el6.x86_64

    kernel-devel-2.6.32-358.6.1.el6.x86_64
    libreport-plugin-kerneloops-2.0.9-15.el6.x86_64
    kernel-2.6.32-358.2.1.el6.x86_64
    kernel-firmware-2.6.32-358.6.1.el6.noarch
    abrt-addon-kerneloops-2.0.8-15.el6.x86_64
    kernel-2.6.32-358.6.1.el6.x86_64
    kernel-headers-2.6.32-358.6.1.el6.x86_64
    dracut-kernel-004-303.el6.noarch
    kernel-2.6.32-358.el6.x86_64
    kernel-devel-2.6.32-358.2.1.el6.x86_64

    glibc-common-2.12-1.107.el6.x86_64
    glibc-2.12-1.107.el6.x86_64
    libcap-2.16-5.5.el6.x86_64
    libcap-ng-0.6.4-3.el6_0.1.x86_64
    libcollection-0.6.0-9.el6.x86_64
    glibc-headers-2.12-1.107.el6.x86_64
    glibc-static-2.12-1.107.el6.x86_64
    libcurl-7.19.7-36.el6_4.x86_64
    libcgroup-0.37-7.1.el6_4.x86_64
    glibc-devel-2.12-1.107.el6.x86_64
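A listing like the above can be pulled with an rpm query along these lines (the grep pattern here is just an example):

    rpm -qa | egrep 'zfs|dkms|kernel|e2fsprogs|lvm2|mdadm|glibc|libcap' | sort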



    • #12
First, I would like to say that I'm quite intrigued by these results, and I thank you for your efforts.

Right now I'm setting up an installation with three Intel SSDs, because I think it would be interesting to run the same benchmarks on flash storage for comparison. I intend to present the three drives through passthrough to a KVM installation of CentOS 6.4. The hardware is not as high-end, but it will be a nice experiment anyway: a dual-core Phenom II with 12GB RAM for the KVM guest (16GB total) and SATA3. The drives are 2x Intel SSD 520 120GB plus one 180GB.
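One way to hand the raw drives to the guest is block-device passthrough via libvirt, roughly as below; the guest name and device paths are placeholders, and passing the whole SATA controller through via PCI passthrough is the other option:

    virsh attach-disk centos64 /dev/sdb vdb --cache none --persistent
    virsh attach-disk centos64 /dev/sdc vdc --cache none --persistent
    virsh attach-disk centos64 /dev/sdd vdd --cache none --persistent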

I will write back soon with the results.



      • #13
        The drives:



        And the results:

[OpenBenchmarking.org result link]


Maybe the difference will come with more disks or enterprise hardware and more time to optimise; unfortunately, not all the drives will be in my possession in the coming days. Also, my desktop board chipset (990X) is heavily constrained in throughput compared to even low-end server (not micro, though) hardware, and the CPU is really slow compared to the monster Xeon in the OP's server. The three SSDs are not on their latest firmware, and AFAIK that would make a difference where discard/single-drive use is involved, but I guess the performance wouldn't be much different inside RaidZ or RAID5 arrays. In palimpsest the mdRAID5 device shows some impressive figures:



        Regards, Todor



        • #14
          Test on a modern system and then we might take this seriously. Right now, this benchmark is just as useless as Michael's one.



          • #15
Apples and pears

Originally posted by RealNC:
            Test on a modern system and then we might take this seriously. Right now, this benchmark is just as useless as Michael's one.
            That's very unfair! I thought both benchmarks were very interesting. If you can do better, go ahead!
            However, using KVM, even with PCI passthrough, does mean that the comparison isn't apples-to-apples. Maybe using a set of 3 rotating drives on the same system would clarify things.
            I think the main message from these benchmarks is that ZFS scales better with a large number of drives, but EXT4 is faster with a few drives.
            Also, I am not comfortable with using LVM2 for raiding. The raid should be pure MD, and then if you need the extra functionality provided by LVM, it should sit on top of that, seeing the existing raid array as a single volume, else performance plummets.
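For what it's worth, the layering I have in mind looks roughly like this; device names, volume names and the RAID level are placeholders:

    # pure MD RAID underneath
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # LVM on top, seeing the array as a single physical volume
    pvcreate /dev/md0
    vgcreate vg_bench /dev/md0
    lvcreate -n lv_bench -l 100%FREE vg_bench

    # EXT4 on the logical volume
    mkfs.ext4 /dev/vg_bench/lv_bench
    mount /dev/vg_bench/lv_bench /mnt/bench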



            • #16
Originally posted by RealNC:
              Test on a modern system and then we might take this seriously. Right now, this benchmark is just as useless as Michael's one.
              That's a load of shit and you know it. You're not going to build a data center around the latest steaming turd Linus pinched off, you're going to build the data center around the security updates from Red Hat or Canonical's last LTS release.



              • #17
                Re-doing the tests

I have been re-running some of the tests. After the kernel updates, some of the tests give better results (the "4000 Files, 32 Sub Dirs" test in particular is affected more than the others).

The important thing is that my original benchmark results are no longer a valid baseline for comparison with later tests.

Right now I am just fine-tuning the list of tests. I will re-run everything once I have the faster disk. Still, the overall conclusion doesn't change:

1. ZFS performance on small numbers of disks is dismal. Looking at single-disk or even simple mirror results makes ZFS look unfairly bad.
2. EXT4 on LVM on mdadm RAID does not compete well with ZFS on large numbers of disks. The EXT4 stack is, however, ideal for simpler disk configurations.

That answers my original testing objective, but it raised some new questions:

Question 1: How much of an impact does each of the layers in the EXT4-on-LVM-on-mdadm stack have?
1. I will do some tests on simple disk setups (single-disk, mirror) where I eliminate either LVM or mdadm.
2. Putting EXT4 on a ZVol proved interesting, but because of my kernel update the results aren't directly comparable. Essentially this replaces mdadm+LVM (a rough sketch of that setup is below).
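The EXT4-on-ZVol variant looks roughly like this; the pool layout, volume size and names are placeholders rather than my exact settings:

    # the ZFS pool provides the redundancy, a ZVol replaces mdadm+LVM
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create -V 100G tank/ext4vol
    mkfs.ext4 /dev/zvol/tank/ext4vol
    mount /dev/zvol/tank/ext4vol /mnt/ext4-on-zfs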

Question 2: How do these results compare with real-world experience?
1. What happens when we actually start to use the extra features: snapshots, creating new volumes, creating new mountpoints, etc.? To answer this I would love to set up a benchmark on a "busy" system ... creating and destroying snapshots while the tests are running. This is difficult to compare: new mdadm arrays need to be initialized and new EXT4 file systems must be formatted, whereas ZFS datasets are usable immediately.
2. What about a system that is under heavy CPU load? Would ZFS or EXT4 cope better in that situation? I am really not sure how to test this. I imagine generating load with n processes performing checksums or zip/unzip work to keep the CPUs at about 70% utilization. I would have to run the load generation purely from RAM so that it doesn't impact the disk performance testing through disk controller bottlenecks (see the sketch below).
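A rough sketch of how I might drive the "busy system" and CPU-load scenarios; the pool/dataset name, snapshot interval and worker count are placeholders, and the checksum workers read from /dev/zero so they generate no disk I/O of their own:

    # snapshot churn in the background while the benchmark runs
    while true; do
        zfs snapshot tank/data@churn
        sleep 10
        zfs destroy tank/data@churn
        sleep 10
    done &

    # n CPU-load workers (8 here), checksumming zeros purely from RAM
    for i in $(seq 1 8); do
        ( while true; do dd if=/dev/zero bs=1M count=512 2>/dev/null | sha256sum >/dev/null; done ) &
    done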

Besides that, I am also adding a 10-disk Raid-6 vs a 10-disk Raid-Z2 comparison.
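The two 10-disk arrays would be created along these lines (device names are placeholders; LVM and EXT4 go on top of the md array as in the other tests):

    # 10-disk mdadm RAID-6
    mdadm --create /dev/md1 --level=6 --raid-devices=10 /dev/sd[b-k]

    # 10-disk RAID-Z2 pool
    zpool create tankz2 raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk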



                • #18
Originally posted by feydun:
                  ...However, using KVM, even with PCI passthrough, does mean that the comparison isn't apples-to-apples. Maybe using a set of 3 rotating drives on the same system would clarify things.
                  I think the main message from these benchmarks is that ZFS scales better with a large number of drives, but EXT4 is faster with a few drives.
                  Also, I am not comfortable with using LVM2 for raiding. The raid should be pure MD, and then if you need the extra functionality provided by LVM, it should sit on top of that, seeing the existing raid array as a single volume, else performance plummets.
Actually, my tests above are not fair at all: to compare properly with ZFS, MD should be run under LVM2 so that snapshot functionality is on par, and the EXT4 results would then be quite different.
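For the snapshot side, the two stacks would be exercised with something like this (volume, dataset and size names are placeholders):

    # LVM snapshot of an EXT4 logical volume
    lvcreate --snapshot --name lv_bench_snap --size 10G /dev/vg_bench/lv_bench

    # ZFS snapshot of a dataset
    zfs snapshot tank/data@before-test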

Originally posted by hartz:
...Besides that, I am also adding a 10-disk Raid-6 vs a 10-disk Raid-Z2 comparison.
This is going to be really interesting, because this is where the ZFS filesystem really shines: heavy RAID setups, especially with ARC/L2ARC.
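For reference, an SSD can be attached as an L2ARC cache device to an existing pool with something like this (pool and device names are placeholders):

    zpool add tank cache sdx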



                  • #19
                    More 9 and 10 disk combinations added

[OpenBenchmarking.org result link]



                    • #20
                      For me the biggest surprise here is that the 5-disk RaidZ outperforms the 10-disk RaidZ2.

Now I am going to have to test a pool with 2 x 5-disk RaidZ vdevs.
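That layout would be created with something like this (device names are placeholders):

    # one pool, two 5-disk RaidZ vdevs
    zpool create tank raidz sdb sdc sdd sde sdf raidz sdg sdh sdi sdj sdk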

