ZFS On Linux Is Now Set For "Wide Scale Deployment"


  • ZFS On Linux Is Now Set For "Wide Scale Deployment"

    Phoronix: ZFS On Linux Is Now Set For "Wide Scale Deployment"

    The Sun/Oracle ZFS file-system port to the Linux kernel has now been deemed, with its new release, "ready for wide scale deployment on everything from desktops to super computers." Will you use ZFS On Linux?


  • #2
    Will I use ZFS on Linux?

    I already do... I'm attempting to use it to recover a corrupted three-drive ZFS RAID pool that my NAS ate while I was swapping a drive out (and attempting to resize the pool at the same time).

    That being said, I broke the pool out of my own stupidity, right after I had neglected to take a fresh backup due to impatience... So either I rewrite the parts of ZFS that handle the drive labels to remove the checksum verification and convince it that the missing drive is just offline, or I lose about 10 years of digital photos...

    When I've just left the ZFS array on its own, it has performed wonderfully and reliably in the three-drive setup in my FreeNAS box. The online scrubbing/verification and end-to-end checksums are reassuring, as is the fault tolerance for a single-drive failure. Backups are still required anyway, but it's reassuring to know that if a drive dies I have time to find a spare and swap it in without having to scramble.

    Current uptime is only ~90 days, but that's due to some power outages at the beginning of winter.
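    For what it's worth, a recovery attempt along those lines usually starts with a read-only, rewind-style import before resorting to rewriting labels. A rough sketch of the usual sequence (the pool name `tank` is a placeholder for the actual pool):

```shell
# Import read-only first so nothing further is written to the damaged pool
zpool import -o readonly=on -f tank

# If that fails, a rewind import (-F) discards the last few transaction
# groups to reach an older consistent state; -n first does a dry run
zpool import -F -n tank
zpool import -F tank

# Once imported, see what is actually recoverable
zpool status -v tank
zfs list -r tank
```

    These commands need root and a pool that is at least partially readable; with a drive missing from a non-redundant configuration they may still fail, but they are the safe first steps.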



    • #3
      Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.



      • #4
        Nope, not going to use it. Btrfs does everything I need already, so why should I even bother with ZFS? Especially since it's under the CDDL.



        • #5
          Originally posted by garegin View Post
          Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.
          The last time I tried using it in production was the KQ Infotech port on RHEL 6, but that wasn't stable under high I/O load (a BackupPC server).
          Since then, I have also used the LLNL port to access data on ZFS pools for recovery, with success, but given the nature of that usage I can't generalize it into a recommendation.

          If you do new ZFS benchmarks, Michael, please don't do only single-disk/SSD benchmarks; those are pointless. A comparison of a multi-disk mdadm RAID5/6 against a ZFS raidz/raidz2 would be very interesting, though. Further value could be gained by testing how much a ZIL log or cache-device SSD improves speeds.
          Otherwise, thanks for the great site! :-)
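          For reference, that comparison could be set up roughly like this; the device names and pool name are placeholders, and this is a minimal sketch rather than a tuned configuration:

```shell
# Software RAID5 across three disks with mdadm, ext4 on top
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0

# The roughly equivalent ZFS raidz pool
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Optionally add an SSD partition as a ZIL log device and another as cache
zpool add tank log /dev/sde1
zpool add tank cache /dev/sde2
```

          Benchmarking both with and without the log/cache devices would show how much the SSD actually buys.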



          • #6
            Originally posted by Michael
            New benchmarks of ZFS On Linux compared to other Linux file-systems will likely come soon. The last time at Phoronix we did extensive ZFS Linux benchmarks was last summer with ZFS On Linux With Ubuntu 12.04 LTS.
            Please test ZFS pools created with different ashift values when you do these benchmarks. The default ashift is hardware-dependent and will be wrong if your hardware lies about its sector size. You can check a pool's ashift after creation by running `zdb`. If your hardware lies, it will likely be ashift=9. It should be ashift=12 on Advanced Format disks, which are basically hard disks manufactured after 2009, and ashift=13 on SSDs manufactured in roughly the same time frame. If you do not do this, your benchmarks will be invalid.
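            Since ashift is just the base-2 logarithm of the physical sector size, the mapping above works out as follows (a small illustration in Python, not part of any ZFS tool):

```python
import math

def recommended_ashift(physical_sector_bytes: int) -> int:
    """ashift is the base-2 logarithm of the physical sector size."""
    ashift = int(math.log2(physical_sector_bytes))
    # Sector sizes are always powers of two; sanity-check the input
    assert 2 ** ashift == physical_sector_bytes
    return ashift

print(recommended_ashift(512))   # legacy 512-byte-sector disks -> 9
print(recommended_ashift(4096))  # 4K Advanced Format disks     -> 12
print(recommended_ashift(8192))  # 8K pages on many SSDs        -> 13
```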

            Originally posted by Veerappan View Post
            Will I use ZFS on Linux?

            I already do... I'm attempting to use it to recover a corrupted three-drive ZFS RAID pool that my NAS ate while I was swapping a drive out (and attempting to resize the pool at the same time).

            That being said, I broke the pool out of my own stupidity, right after I had neglected to take a fresh backup due to impatience... So either I rewrite the parts of ZFS that handle the drive labels to remove the checksum verification and convince it that the missing drive is just offline, or I lose about 10 years of digital photos...

            When I've just left the ZFS array on its own, it has performed wonderfully and reliably in the three-drive setup in my FreeNAS box. The online scrubbing/verification and end-to-end checksums are reassuring, as is the fault tolerance for a single-drive failure. Backups are still required anyway, but it's reassuring to know that if a drive dies I have time to find a spare and swap it in without having to scramble.

            Current uptime is only ~90 days, but that's due to some power outages at the beginning of winter.
            You should join #zfs on freenode. The community should be able to help you with recovery.

            Originally posted by garegin View Post
            Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.
            My understanding is that paid support is important to LLNL, which uses the Lustre filesystem on top of ZFS. With Linux, they have paid support from both Whamcloud and Red Hat. If they switched to FreeBSD, they would need to port Lustre and would then likely have to support it themselves. If they switched to Solaris (or Illumos), they could get support for the base system from a vendor, but they would still be on their own for Lustre support. On the other hand, Whamcloud has a significant interest in ZFS as a replacement for their ext4-based ldiskfs, which means LLNL can get support for Lustre from Whamcloud when using ZFS as a Lustre backend on Linux.

            I should note that I am not associated with LLNL. My statements here should be taken as those of an outsider.



            • #7
              Originally posted by Ares Drake View Post
              The last time I tried using it in production was the KQ Infotech port on RHEL 6, but that wasn't stable under high I/O load (a BackupPC server).
              Since then, I have also used the LLNL port to access data on ZFS pools for recovery, with success, but given the nature of that usage I can't generalize it into a recommendation.
              That code had numerous issues; I know because I wrote fixes for several of them. You should have a far better experience with the latest ZFSOnLinux code.

              Originally posted by Ares Drake View Post
              If you do new ZFS benchmarks, Michael, please don't do only single-disk/SSD benchmarks; those are pointless. A comparison of a multi-disk mdadm RAID5/6 against a ZFS raidz/raidz2 would be very interesting, though. Further value could be gained by testing how much a ZIL log or cache-device SSD improves speeds.
              That would be great, but I doubt it will happen. When I last spoke to Michael, he told me he was not comfortable doing multiple-disk benchmarks because he lacked appropriate enterprise hardware. This is despite the fact that ZFS works well without high-end hardware (that is a selling point!) and the old disks he has would be fine. :/



              • #8
                Originally posted by garegin View Post
                Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.
                I prefer apt/Debian to FreeBSD. Before I ran Debian + ZFSonLinux (in a mostly unrecommended configuration), I had network stutters and similar issues on my N40L, which is a known problem; on Debian these seemed to be absent. Also, this is a home server: I run more than just file storage on it, and I simply prefer to use what I'm used to. It's rock stable for me, running a 5-disk raidz2.

                Also, it's arguable whether Solaris really has the best support. It has support from Oracle, but open-source ZFS and Oracle's ZFS are now two different beasts. Either way, ZFS is a great filesystem and I don't see anything bad about this announcement.
                Last edited by ownagefool; 29 March 2013, 12:33 PM.



                • #9
                  Linux is a great piece of technology and it is free and open source.
                  ZFS is a great piece of technology and it is free and open source.

                  It is just so sad that we can't integrate it into the mainline kernel due to license incompatibilities.

                  License proliferation is harming the free open source software community.



                  • #10
                    Originally posted by uid313 View Post
                    Linux is a great piece of technology and it is free and open source.
                    ZFS is a great piece of technology and it is free and open source.

                    It is just so sad that we can't integrate it into the mainline kernel due to license incompatibilities.

                    License proliferation is harming the free open source software community.
                    The idea that licensing is a hurdle for Linus is a myth. Linus has a sign-off policy that requires all authors of code submissions to provide a Signed-off-by. In the case of ZFSOnLinux, that includes Oracle, myself, and others. That policy is the only hurdle:



                    Code:
                            By making a contribution to this project, I certify that:
                    
                            (a) The contribution was created in whole or in part by me and I
                                have the right to submit it under the open source license
                                indicated in the file; or
                    On that note, kernel updates are infrequent and vendors rarely backport filesystem changes. Users rarely update in-tree kernel components even when out-of-tree updates are available (e.g. KVM). This means that putting code into Linus' tree will expose future users to bugs that have long since been fixed, unless the code was provably correct at the time of submission.

                    With this in mind, I am not certain that I could give Linus my Signed-off-by in good conscience should the opportunity arise; doing so would condemn future users to bugs that could have been avoided. I find the current situation, where the various package managers handle the installation of ZFS and its updates, preferable. It ensures that users receive updates in a timely manner and protects them from being exposed to bugs in ancient code.
                    Last edited by ryao; 29 March 2013, 03:53 PM.

