Running The Native ZFS Linux Kernel Module, Plus Benchmarks

  • Running The Native ZFS Linux Kernel Module, Plus Benchmarks

    Phoronix: Running The Native ZFS Linux Kernel Module, Plus Benchmarks

    In August we delivered the news that Linux was soon to receive a native ZFS kernel module. The Sun (now Oracle) ZFS file-system has long been sought after for Linux, though less so now that Btrfs has emerged, but incompatibilities between the CDDL and GPL licenses have kept such support out of the mainline Linux kernel. ZFS-FUSE runs the ZFS file-system in user-space, but its performance is slow. There has also been work by the Lawrence Livermore National Laboratory on porting ZFS to Linux as a native kernel module; that LLNL work is incomplete but still progressing under a US Department of Energy contract. It is via this work that developers at KQ Infotech in India have produced a working Linux kernel module for ZFS. In this article are some new details on KQ Infotech's ZFS kernel module and our results from testing the ZFS file-system on Linux.
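
    For those wanting a quick feel for what testing the native module involves, here is a rough sketch, assuming KQ Infotech's SPL and ZFS packages are already built and installed, and using made-up device and pool names:

        modprobe zfs                 # load the native kernel module; no FUSE layer involved
        zpool create tank /dev/sdb   # create a single-disk pool on a spare drive
        zfs create tank/bench        # create a file-system to point the benchmarks at
        zpool status tank            # confirm the pool is online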

  • #2
    It's nice to see that PTS 3.0 alpha is writing articles on its own. But this is ZFS, not a regular FS like ext4.

    What check tools does KQ Infotech's port provide?
    How is RAID support and scaling? Linear scaling?
    Snapshots?
    Volume management?
    Does forcing a corruption on the RAID break the FS?
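
    For reference, the stock ZFS commands behind those questions look roughly like this (pool and dataset names are made up); whether KQ Infotech's port supports all of them is exactly what I'm asking:

        zpool scrub tank                # online integrity check, ZFS's answer to fsck
        zpool status -v tank            # report any checksum errors the scrub found
        zfs snapshot tank/data@before   # instant snapshot of a dataset
        zfs create -V 10G tank/vol1     # built-in volume management (a zvol block device)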

    • #3
      What mount options were used for btrfs?
      Is the new space_cache option used in the benchmarks?
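
      In case it wasn't, something like this is how it gets enabled (made-up device and mount point; space_cache is only in recent kernels):

        mount -t btrfs -o space_cache /dev/sdb1 /mnt/btrfs
        # or switch it on for an already-mounted file-system
        mount -o remount,space_cache /mnt/btrfs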

      Bye

      • #4
        What about a benchmark that actually tests ZFS for what it is intended for?

        As in the following anandtech article:

        • #5
          The only 2.6.37 test that really showed Btrfs hurting was PostgreSQL: http://www.phoronix.com/scan.php?ite...2637_ext4btrfs

          I'd love to see ZFS put through the same test. If the native ZFS port does Postgres decently, it may be enough to make some serious converts.
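
          For anyone with the KQ Infotech module installed who wants to try it, something along these lines with the Phoronix Test Suite should do it (assuming the PostgreSQL profile is named pts/pgbench and that the test's working directory sits on the ZFS mount):

            phoronix-test-suite benchmark pts/pgbench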

          • #6
            Originally posted by mbouchar:
            What about a benchmark that actually tests ZFS for what it is intended for?

            As in the following anandtech article:
            http://www.anandtech.com/show/3963/z...d-benchmarking
            Yeah, it's great. ZFS is for Enterprise servers - people don't get it. There are lots of complaints that you cannot run ZFS on machines with 128MB of RAM or less. And that ZFS is slow. And so on.

            Who the heck cares if you cannot run Enterprise stuff on 128MB RAM machines? Who the heck cares how slow ZFS is on a single drive when it protects your data, whereas other filesystems might corrupt it?

            I would love to see Enterprise benchmarks; then ZFS would shine and the Linux solutions would fail miserably.

            1) Data safety. ZFS wins. Linux filesystems are weak in this respect, as confirmed by computer scientists in research papers.
            2) Speed. Use 48 drives or more, and ZFS will crush the competition. I doubt Linux can even use that many drives to their full extent. I suspect the Linux solutions will scale worse and worse the more drives you add. Heck, recently the ext4 architect bragged that ext4 is no longer crippled by 30-ish(?) drives in an array.
            3) LARGE raids. ZFS wins. Linux fails miserably.
            4) Ease of use. ZFS wins easily (see the sketch below). Linux fails.
            5) etc etc etc
            The list could go on. But there are no Enterprise benchmarks here on Phoronix. People see that ZFS is slow on one drive, and they draw the conclusion that ZFS is slow and cannot deliver serious performance.
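
            To illustrate the ease-of-use point (4) above, building a big redundant pool is a couple of commands (device names are made up):

              zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # double-parity vdev, no separate RAID or LVM layer
              zfs set compression=on tank                                                      # per-dataset features are one command away
              zpool add tank raidz2 /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm      # grow the pool with another vdev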

            • #7
              These tests just reinforce one question in my mind: "Why aren't more people using XFS?" I switched to XFS from ReiserFS when it became deprecated. XFS is featureful, stable, and fast. Ext3 never really performed that well for me, but for some reason it is the default in most distributions. Now people are going to move from Ext3 to Ext4 or Btrfs in the future, but most will probably gain very little from it.

              • #8
                Why wasn't JFS included in this benchmark? Is there something I'm missing?

                I've been using XFS and JFS for quite some time. I don't think I've ever used ext3/4.

                XFS in 2.6.37 looks promising; does anyone know why there is such a big difference, and how JFS compares to it?

                cheers

                • #9
                  Originally posted by Abraxas:
                  These tests just reinforce one question in my mind: "Why aren't more people using XFS?"
                  I'm using it. I also noticed the very nice increase in its performance with kernel 2.6.37.

                  Oh, and BTW:
                  We had also run a similar subset of these tests on a standard 7200RPM Serial ATA 2.0 hard drive, and proportionally the results did not end up being different on an HDD versus an SSD.
                  Thank you!

                  • #10
                    Originally posted by kebabbert:
                    2) Speed. Use 48 drives or more, and ZFS will crush the competition. I doubt Linux can even use that many drives to their full extent. I suspect the Linux solutions will scale worse and worse the more drives you add. Heck, recently the ext4 architect bragged that ext4 is no longer crippled by 30-ish(?) drives in an array.
                    3) LARGE raids. ZFS wins. Linux fails miserably.
                    You don't really know what you're talking about, right? *

                    - Gilboa
                    * Looks at the Linux server w/48 SAS drives in disbelief...
                    oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                    oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                    oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                    Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
