
Thread: ZFS On Linux Is Now Set For "Wide Scale Deployment"

  1. #11


    Quote Originally Posted by uid313 View Post
    Linux is a great piece of technology and it is free and open source.
    ZFS is a great piece of technology and it is free and open source.

    It is just so sad that we can't integrate it mainline due to license incompatibilities.

    License proliferation is harming the free open source software community.
    The idea that licensing is a hurdle for Linus is a myth. Linus has a sign-off policy that requires every author of a code submission to provide a Signed-off-by line. In the case of ZFSOnLinux, that includes Oracle, myself and others. That policy is the only hurdle:

    https://www.kernel.org/doc/Documenta...mittingPatches

    Code:
            By making a contribution to this project, I certify that:
    
            (a) The contribution was created in whole or in part by me and I
                have the right to submit it under the open source license
                indicated in the file; or
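    As an aside, the sign-off itself is just a trailer on the commit message, and git can add it automatically (example only):

    Code:
            # -s appends "Signed-off-by: Name <email>" taken from your git
            # user.name/user.email settings, certifying the terms quoted above.
            git commit -s -m "example commit message"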
    On that note, kernel updates are infrequent and vendors rarely backport filesystem changes. Users rarely update in-tree kernel components even when out-of-tree updates are available (e.g. KVM). This means that putting code into Linus' tree will expose future users to bugs that have long since been fixed, unless the code itself was provably correct at the time of submission.

    With this in mind, I am not certain that I could provide my sign-off to Linus in good conscience should the opportunity arise. Providing a sign-off to Linus would be to condemn future users to bugs that could have been avoided. I find the current situation, where various package managers handle the installation of ZFS and its updates, preferable. It ensures that users receive updates in a timely manner and protects them from being exposed to bugs in ancient code.
    Last edited by ryao; 03-29-2013 at 03:53 PM.

  2. #12
    Join Date
    Dec 2011
    Posts
    31

    Would like to try ZFS, but how?

    The question asked in this thread, "Will you use ZFS?"...

    OK, I'd like to give it a try. Any pointers on how? I'm a Debian user. Can I "apt-get something" to make it work? Is there a Debian fork that has ZFS compiled in the kernel? I'm a reasonably capable Linux geek, but not a developer and there is a limit on how much hacking I'm willing to do to make this work.

    Thanx for any advice.

    - Candide

  3. #13


    Quote Originally Posted by Candide View Post
    The question asked in this thread, "Will you use ZFS?"...

    OK, I'd like to give it a try. Any pointers on how? I'm a Debian user. Can I "apt-get something" to make it work? Is there a Debian fork that has ZFS compiled in the kernel? I'm a reasonably capable Linux geek, but not a developer and there is a limit on how much hacking I'm willing to do to make this work.

    Thanx for any advice.

    - Candide
    ZFS is in the process of being packaged for Debian. The project for that is here:

    https://alioth.debian.org/projects/pkg-zfsonlinux

    Alternatively, you could use the official documentation to build it yourself locally. It is fairly straightforward:

    http://zfsonlinux.org/generic-deb.html
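    Once the packages are installed by either route, trying it out is just the usual zpool/zfs commands. A minimal example, assuming two spare disks at /dev/sdb and /dev/sdc (placeholders; anything on them will be destroyed):

    Code:
            sudo modprobe zfs                                # load the kernel module
            sudo zpool create tank mirror /dev/sdb /dev/sdc  # mirrored pool named "tank"
            sudo zfs create tank/data                        # create a filesystem in the pool
            zpool status tank                                # check pool health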

    Edit: On second thought, were you asking about / on ZFS? If that is the case, you might want to contact the people packaging it on Debian for / on ZFS documentation. The other option is to adapt documentation from other distributions.
    Last edited by ryao; 03-29-2013 at 09:15 PM.

  4. #14
    Join Date
    Jan 2009
    Posts
    1,404


    Quote Originally Posted by Veerappan View Post
    Will I use ZFS on Linux?

    I already do... I'm attempting to use it to recover a corrupted 3-drive ZFS Raid pool that my NAS ate while I was swapping a drive out (and attempting to re-size the pool at the same time).

    That being said, I broke the pool out of my own stupidity right after I had neglected to take a fresh backup due to impatience... So either I re-write the parts of ZFS that handle the drive labels to remove the checksum verification and convince it that the missing drive is just off-line, or I lose about 10 years of digital photos...

    When I've just left the ZFS array on its own, it has performed wonderfully and reliably in the 3-drive setup in my freenas box. The on-line scrubbing/verification and end-to-end checksums are re-assuring, as well as the fault tolerance for a single-drive failure. Backups are still required anyway, but it's reassuring to know that if a drive dies I have time to find a spare and swap it in without having to scramble.

    Current uptime is only ~90 days, but that's due to some power outages at the beginning of winter.

    Assuming this is for your own use and it isn't written to very often, I'd recommend snapraid, as you don't have to worry about losing the entire array.
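    Roughly how it works (paths and directive names below are only illustrative; check the snapraid manual for your version): each data disk keeps its own ordinary filesystem and snapraid computes parity across them on demand, so a failure beyond what the parity covers loses only the files on the failed disk, not the whole set.

    Code:
            # illustrative snapraid.conf
            parity  /mnt/parity/snapraid.parity
            content /mnt/parity/snapraid.content
            content /mnt/disk1/snapraid.content
            disk d1 /mnt/disk1/
            disk d2 /mnt/disk2/

    After that it is "snapraid sync" when files change and "snapraid check"/"snapraid fix" when something breaks.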

  5. #15


    Quote Originally Posted by liam View Post
    Assuming this is for your own use and it isn't written to very often, I'd recommend snapraid, as you don't have to worry about losing the entire array.
    It is more appropriate to call it a pool. Also, Veerappan has not provided enough information for people to make suggestions. A "3-drive ZFS Raid pool" can mean any number of things:

    • 1 mirror vdev with 3 disks
    • 1 raidz1 vdev with 3 disks
    • 1 raidz2 vdev with 3 disks
    • 1 single-disk vdev and 1 mirror vdev with 2 disks
    • 1 single-disk vdev and 1 raidz1 vdev with 2 disks
    • 3 single-disk vdevs (no redundancy)


    How things can go wrong and how he might recover differ based on which one of those he meant. In the case of three single-disk vdevs, he would be running the equivalent of "RAID 0". On a related note, there is a fairly interesting write-up about redundancy in ACM Queue:

    http://queue.acm.org/detail.cfm?id=1670144

    In particular, the main point is that you need at least enough redundancy to survive two simultaneous failures. That would mean a 3-disk pool should use either mirroring or raidz2. Mirroring would be better from a performance standpoint.
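    To make that concrete, here is roughly how those layouts would be created from 3 disks (pool and device names are placeholders):

    Code:
            # 3-way mirror: survives any 2 disk failures
            zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

            # raidz2: also survives any 2 failures; with only 3 disks the usable
            # space is the same as the mirror, and the mirror performs better
            zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc

            # 3 single-disk vdevs: the "RAID 0" equivalent, no redundancy
            zpool create tank /dev/sda /dev/sdb /dev/sdc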

  6. #16
    Join Date
    Feb 2013
    Posts
    59


    ZFS IS STUPID.. Why the hell would you use that piece of crap.. The only valid reason I see of using it would be a stop-gap measure until BTRFS becomes officially stable..

    Once Btrfs is stable, it will be the most bestest and technologically advanced filing system in the whole universe.. Even aliens from outer-space will start using it..

    ZFS isn't even compatible with the GPL!.. What does that say about it?.. It might as well be a proprietary file-system then.. Would you be excited if microsoft was about to release some new proprietary file-system??..

    Technically, if oracle wanted, they could just one day suddenly add a new line in their proprietary end-user license agreement that says "We can remotely wipe your whole ZFS hard-drive"...... And then when you cry that they deleted all of your captain picard photographs, they can just say "Well it was OUR file-system, so we can do whatever we want with it.", and then they will make your whole computer explode, just because they can....because they just added that to the license agreement too..

    I guess the moral is that ZFS is suicide, and if you want to die in agnoizing pain, then it is a good choice and I want to be there to hear your screams when you lose every thing dear to you..

    ..Or you can make the right decision and start using Btrfs every day like a good boy.. You will never lose data no matter what, and you don't even need other raid crap any more because some how btrfs is so omniscient that it knows how to do raid some how.. It is so amazing every day for me.. I think you will feel so good using it.. Please use it..

  7. #17
    Join Date
    Jan 2013
    Posts
    1,456


    Quote Originally Posted by Baconmon View Post
    ZFS IS STUPID.. Why the hell would you use that piece of crap.. The only valid reason I see of using it would be a stop-gap measure until BTRFS becomes officially stable..

    Once Btrfs is stable, it will be the most bestest and technologically advanced filing system in the whole universe.. Even aliens from outer-space will start using it..

    ZFS isn't even compatible with the GPL!.. What does that say about it?.. It might as well be a proprietary file-system then.. Would you be excited if microsoft was about to release some new proprietary file-system??..

    Technically, if oracle wanted, they could just one day suddenly add a new line in their proprietary end-user license agreement that says "We can remotely wipe your whole ZFS hard-drive"...... And then when you cry that they deleted all of your captain picard photographs, they can just say "Well it was OUR file-system, so we can do whatever we want with it.", and then they will make your whole computer explode, just because they can....because they just added that to the license agreement too..

    I guess the moral is that ZFS is suicide, and if you want to die in agnoizing pain, then it is a good choice and I want to be there to hear your screams when you lose every thing dear to you..

    ..Or you can make the right decision and start using Btrfs every day like a good boy.. You will never lose data no matter what, and you don't even need other raid crap any more because some how btrfs is so omniscient that it knows how to do raid some how.. It is so amazing every day for me.. I think you will feel so good using it.. Please use it..
    2/10 bad trolling

  8. #18
    Join Date
    Jan 2009
    Posts
    1,404


    Quote Originally Posted by ryao View Post
    It is more appropriate to call it a pool. Also, Veerappan has not provided enough information for people to make suggestions. A "3-drive ZFS Raid pool" can mean any number of things:

    • 1 mirror vdev with 3 disks
    • 1 raidz1 vdev with 3 disks
    • 1 raidz2 vdev with 3 disks
    • 1 single-disk vdev and 1 mirror vdev with 2 disks
    • 1 single-disk vdev and 1 raidz1 vdev with 2 disks
    • 3 single-disk vdevs (no redundancy)


    How things can go wrong and how he might recover differ based on which one of those he meant. In the case of three single-disk vdevs, he would be running the equivalent of "RAID 0". On a related note, there is a fairly interesting write-up about redundancy in ACM Queue:

    http://queue.acm.org/detail.cfm?id=1670144

    In particular, the main point is that you need at least enough redundancy to survive two simultaneous failures. That would mean a 3-disk pool should use either mirroring or raidz2. Mirroring would be better from a performance standpoint.
    I believe I gave enough prefacing information to make clear the type of situation for which snapraid is appropriate.
    Also, I'm not going to argue semantics on this since it's not clear to me which term is correct.
    Lastly, losing complete arrays/pools is exactly the type of thing I try to avoid and is why I don't like striping (I certainly won't argue that it has no place, just that it isn't the best solution for my use case).
    It is fair that you should point out that his exact layout is uncertain, but the specifics of that matter less to me than knowing how he intends to use the drives.

  9. #19
    Join Date
    Oct 2012
    Posts
    165


    I'd be interested in mounting / on ZFS at Ubuntu install time, but only if there's a simple solution.

  10. #20
    Join Date
    Oct 2007
    Posts
    1,275


    Quote Originally Posted by Baconmon View Post
    ZFS isn't even compatible with the GPL!.. What does that say about it?
    Absolutely nothing. I would use it in a heartbeat if I did RAIDs. Maybe btrfs will be on par with ZFS in about 5 years...
