ZFS On Linux Is Now Set For "Wide Scale Deployment"


  • #16
    ZFS is stupid. Why the hell would you use that piece of crap? The only valid reason I can see for using it would be as a stop-gap measure until Btrfs becomes officially stable.

    Once Btrfs is stable, it will be the best and most technologically advanced filesystem in the whole universe. Even aliens from outer space will start using it.

    ZFS isn't even compatible with the GPL! What does that say about it? It might as well be a proprietary filesystem then. Would you be excited if Microsoft were about to release some new proprietary filesystem?

    Technically, if Oracle wanted, they could one day add a line to their proprietary end-user license agreement that says "we can remotely wipe your whole ZFS hard drive". And then when you cry that they deleted all of your Captain Picard photographs, they can just say "well, it was OUR filesystem, so we can do whatever we want with it", and then they will make your whole computer explode, just because they can, because they added that to the license agreement too.

    I guess the moral is that ZFS is suicide, and if you want to die in agonizing pain, then it is a good choice, and I want to be there to hear your screams when you lose everything dear to you.

    Or you can make the right decision and start using Btrfs every day like a good boy. You will never lose data no matter what, and you don't even need other RAID crap any more, because somehow Btrfs is so omniscient that it knows how to do RAID by itself. It is so amazing every day for me. I think you will feel so good using it. Please use it.

    Comment


    • #17
      Originally posted by Baconmon View Post
      ZFS is stupid. Why the hell would you use that piece of crap? The only valid reason I can see for using it would be as a stop-gap measure until Btrfs becomes officially stable. [...]
      2/10 bad trolling

      Comment


      • #18
        Originally posted by ryao View Post
        It is more appropriate to call it a pool. Also, Veerappan has not provided enough information for people to make suggestions. A "3-drive ZFS Raid pool" can mean any number of things:
        • 1 mirror vdev with 3 disks
        • 1 raidz1 vdev with 3 disks
        • 1 raidz2 vdev with 3 disks
        • 1 disk vdev and 1 mirrored vdev with 2 disks
        • 1 disk vdev and 1 raidz1 vdev with 2 disks
        • 3 disk vdevs (no redundancy)

        How things can go wrong and how he might recover differs based on which one of those he meant. In the 3 disk vdev case, he would be running the equivalent of "RAID 0". On a related note, there is a fairly interesting write-up about redundancy at the ACM Queue:

        http://queue.acm.org/detail.cfm?id=1670144

        In particular, the main point is that you need at least enough redundancy to survive two simultaneous failures. This would mean that a 3-disk pool should use either mirroring or raidz2. Mirroring would be better from a performance standpoint.
        I believe I gave enough prefacing information to make clear the type of situation for which snapraid is appropriate.
        Also, I'm not going to argue semantics on this, since it's not clear to me which term is correct.
        Lastly, losing complete arrays/pools is exactly the type of thing I try to avoid, and is why I don't like striping (I certainly won't argue that it has no place, just that it isn't the best solution for my use case).
        It is fair that you should point out that his exact layout is uncertain, but the specifics of that matter less to me than knowing how he intends to use the drives.
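        For anyone curious, the layouts ryao listed would be created along these lines. This is only a sketch: sda, sdb and sdc are hypothetical device names, and in practice you would use stable /dev/disk/by-id paths instead.

          # 1 mirror vdev with 3 disks (survives 2 failures)
          zpool create tank mirror sda sdb sdc

          # 1 raidz1 vdev with 3 disks (survives 1 failure)
          zpool create tank raidz1 sda sdb sdc

          # 1 raidz2 vdev with 3 disks (survives 2 failures)
          zpool create tank raidz2 sda sdb sdc

          # 3 single-disk vdevs, no redundancy (the "RAID 0" equivalent)
          zpool create tank sda sdb sdc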

        Comment


        • #19
          I'd be interested in mounting / on ZFS at Ubuntu install time, but only if there's a simple solution.

          Comment


          • #20
            Originally posted by Baconmon View Post
            ZFS isn't even compatible with the GPL! What does that say about it?
            Absolutely nothing. I would use it in a heartbeat if I did RAID. Maybe btrfs will be on par with ZFS in about 5 years...

            Comment


            • #21
              Originally posted by mike4 View Post
              I'd be interested in mounting / on ZFS at Ubuntu install but only if there's a simple solution.
              There is a howto available:

              https://github.com/zfsonlinux/pkg-zf...oot-Filesystem

              Comment


              • #22
                Btrfs on Fedora 18 is not yet production quality.

                Originally posted by timemaster View Post
                I have been using ZFS for quite a while now. While ZFS and btrfs have a similar design, there is a big difference between them: ZFS was released as a stable filesystem many years ago by Sun Microsystems, and is now released as stable on Linux, while btrfs is still not stable after many years and is still considered experimental/in development. If you want a filesystem with b-tree functionality in production, your only choice is ZFS. One biased opinion to read:
                http://rudd-o.com/linux-and-free-sof...ter-than-btrfs
                Started my system one day to a message: disk full, cannot write to it. I was using btrfs as installed by the Fedora 18 Anaconda installer. I tried cleaning up by visiting /tmp and removing a few files (yum installer leftovers, as in rm -fr yum2013*). df showed that there was lots of empty space, so I figured deleting some files might help.

                Next thing I saw was a dozen yum files in /home, and my /boot was empty. I have a multiboot setup, so I restarted with Mint 14. Mint could see the files, but when it tried deleting them, it also got the message "cannot write, unable to delete". Fortunately, Mint could read what was left on the disk. I backed up again (a second backup) and essentially reinstalled using LVM and ext4.

                I will try btrfs again as the main filesystem for Fedora 19, but only on a separate test system, on a drive reserved for testing.

                Comment


                • #23
                  Originally posted by lsatenstein View Post
                  Started my system one day to a message: disk full, cannot write to it. I was using btrfs as installed by the Fedora 18 Anaconda installer. [...]
                  The behavior that you observed in btrfs is by design. See the following:

                  http://lwn.net/Articles/393148/

                  Fixing this requires making btrfs2. If you decide to use btrfs before btrfs2 is made, you could do a manual rebalance as a workaround whenever this happens. That will make things work until it happens again. Alternatively, you could use ZFS, which does not suffer from this problem.
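                  If anyone needs it, the manual rebalance would look something like the sketch below. The usage filter and the mountpoint are assumptions; tune them to your system.

                    # rewrite chunks that are mostly empty (cheap first pass)
                    btrfs balance start -dusage=5 /mnt/data

                    # check progress from another shell
                    btrfs balance status /mnt/data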

                  Comment


                  • #24
                    I have never encountered such a situation. And even if it really is due to Btrfs fragmentation, Btrfs includes a mount option, "autodefrag", that does what it says on the tin. The developers are still making sure that it works correctly with huge files before turning it on by default, but it doesn't hurt to enable it right now.
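                    Enabling it is just a mount option. A sketch, assuming a btrfs root filesystem; the mountpoint and the <uuid> placeholder are yours to adjust:

                      # one-off, on an already-mounted filesystem
                      mount -o remount,autodefrag /

                      # or persistently in /etc/fstab (<uuid> is a placeholder)
                      UUID=<uuid>  /  btrfs  defaults,autodefrag  0  0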

                    Comment


                    • #25
                      Originally posted by GreatEmerald View Post
                      I have never encountered such a situation. And even if it really is due to Btrfs fragmentation, Btrfs includes a mount option, "autodefrag", that does what it says on the tin. [...]
                      I am told that defragmentation and rebalancing are two separate things in btrfs. That makes sense, because storing files linearly on disk does not imply that the blocks are part of a balanced tree. If defragmentation and rebalancing really are separate, the autodefrag mount option will not help here. Additionally, developers in #btrfs on freenode informed me a few months ago that there is no automated rebalancing code in btrfs. Unless that has changed, there is no chance of the kernel proactively rebalancing to avoid this issue.

                      You would need to run a rebalance from a cron job. However, you might need to schedule downtime whenever the job runs, because a rebalance can have a crippling effect on IO. That is especially true when btrfs is mounted with discard on SATA drives that predate SATA 3.1, because ATA TRIM is an unqueued command and btrfs does not really issue discards until a rebalance is done. One of the challenges of automated rebalancing is minimizing that performance penalty.
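                      A cron entry for that could look roughly like the following. The schedule, usage filter and mountpoint are assumptions you would tune to your workload:

                        # /etc/crontab: rebalance every Sunday at 03:00,
                        # touching only chunks that are at most half full
                        0 3 * * 0  root  /sbin/btrfs balance start -dusage=50 /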

                      ZFS has a feature called a zvol: a virtual block device whose reported space usage grows and shrinks as data is written and discarded. It offers an excellent opportunity to examine the discard behavior of various filesystems: format the zvol with each of them, mount with discard, write numerous files and then remove them. I did this with a few filesystems, including btrfs. In the case of btrfs, I found that reported space utilization barely decreased following deletion of Gentoo's portage tree from what it was before I unpacked it. With that in mind, I would not be surprised if further analysis revealed the ENOSPC errors to be related to the discard behavior. In particular, if you format a ZFS zvol with btrfs and mount btrfs with discard, you could discover that ENOSPC in btrfs occurs whenever ZFS indicates that the zvol is full.
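                      For anyone who wants to reproduce that experiment, the setup is roughly as follows. This is a sketch: the pool name "tank", the volume size and the mountpoint are assumptions.

                        # create a sparse 10G zvol and put btrfs on it
                        zfs create -s -V 10G tank/testvol
                        mkfs.btrfs /dev/zvol/tank/testvol
                        mount -o discard /dev/zvol/tank/testvol /mnt/test

                        # write and delete files, then see how much space ZFS
                        # still thinks the zvol references
                        zfs get referenced tank/testvol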

                      With that said, this is somewhat off-topic: this thread is about whether or not people plan to try ZFS, not how to fix btrfs.

                      Comment


                      • #26
                        Originally posted by ryao View Post
                        In the case of btrfs, I found that reported space utilization barely decreased following deletion of Gentoo's portage tree from what it was before I unpacked it.
                        To be clear, I meant before I ran `rm -r`.

                        Edit: I am told by developers in #btrfs on freenode that Edward's observations were caused by two separate bugs that have since been fixed. The last remaining issue in the btrfs code that can cause ENOSPC problems involves space being allocated to metadata without any way to automatically reclaim it once freed. The need to do manual rebalancing to resolve ENOSPC issues should disappear after that code is written.

                        With that said, ZFS does not suffer from this problem and is ready for deployment now.
                        Last edited by ryao; 03-30-2013, 06:44 PM.

                        Comment


                        • #27
                          Originally posted by ryao View Post
                          The behavior that you observed in btrfs is by design. [...] Alternatively, you could use ZFS, which does not suffer from this problem.

                          While you're still observing this thread:

                          I'm seriously considering switching to ZFS on my laptop's /home partition. The only doubt I currently have is: what effect will ZFS have on battery life and CPU consumption (probably comparable to Btrfs with gzip compression, or slightly higher)?

                          I'm planning on using gzip compression at the default compression level, due to constrained space; I'm only using a 1 TB HDD, so there's no option of buying a bigger one.

                          Many thanks in advance for your answer.

                          Comment


                          • #28
                            Originally posted by kernelOfTruth View Post
                            I'm seriously considering switching to ZFS on my laptop's /home partition. [...] What effect will ZFS have on battery life and CPU consumption (probably comparable to Btrfs with gzip compression, or slightly higher)? [...] I'm planning on using gzip compression at the default compression level.
                            CPU utilization will likely increase by a few percentage points. However, I cannot speculate on the effect on battery life: the ARC could improve it, while the periodic transaction commit (every 5 seconds) could harm it. If I were you, I would put everything on ZFS, rather than just /home.
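                            If the 5-second commit interval turns out to matter for battery life, ZFS on Linux exposes it as a module parameter you could experiment with. The value below is only an illustration, and whether lengthening it actually saves power is an assumption on my part:

                              # /etc/modprobe.d/zfs.conf: commit transaction groups
                              # every 15 seconds instead of 5 (fewer disk wakeups,
                              # more unwritten data at risk on a crash)
                              options zfs zfs_txg_timeout=15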

                            With that said, you should consider LZ4 compression instead. It has a few properties that make it more appealing. One is that LZ4's throughput is an order of magnitude greater than gzip's, which saves CPU time. Even more appealing is that LZ4 is extremely quick at identifying incompressible data, which saves still more CPU time. As far as mobile use is concerned, these properties should translate into power savings compared to gzip (see the example after the notes below). If you are interested in reading about LZ4, the following links are rather informative:

                            https://extrememoderate.wordpress.com/2011/08/
                            http://denisy.dyndns.org/lzo_vs_lzjb/
                            http://wiki.illumos.org/display/illumos/LZ4+Compression
                            https://code.google.com/p/lz4/

                            There are a few things to keep in mind when looking at those links:
                            1. The first link involves LZ4 r11. The version that ZFS imported is r67 (+/- 2, I forget the exact revision) and LZ4 has seen plenty of improvements since r11, particularly in the time spent detecting incompressible data.
                            2. The second link compares LZO with LZ4. LZO is considered to be quick, but LZ4 beats it in every metric.
                            3. The third link is Illumos' write-up on LZ4, which compares it to ZFS' lzjb. lzjb was invented to obtain fair compression, high throughput and quick detection of incompressible data, metrics considered desirable for filesystem use. LZ4 does so much better than lzjb in all of them that the Illumos developers initially thought it was too good to be true when it was first discussed on their mailing list.
                            4. The fourth link is the LZ4 project page, which has a chart comparing the throughput and compression ratio of LZ4 to other compression algorithms. Of particular interest is that LZ4 has an average compression ratio of 2.1 while gzip -1 has an average ratio of 2.7, so you do not pay much in space for the benefits of LZ4.
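                            Switching a dataset over is a one-liner once the pool supports the feature flag. A sketch, assuming a pool named "tank" with /home on tank/home; note that only data written after the change is compressed with LZ4:

                              # enable the LZ4 feature flag on the pool (one-time)
                              zpool set feature@lz4_compress=enabled tank

                              # use LZ4 for everything under tank/home
                              zfs set compression=lz4 tank/home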

                            Comment


                            • #29
                              Originally posted by ryao View Post
                              The behavior that you observed in btrfs is by design. [...] Alternatively, you could use ZFS, which does not suffer from this problem.
                              While his concerns may be valid, one cannot make the claims you've made here on the basis of his article alone, given the following response: http://lwn.net/Articles/393985/

                              Comment


                              • #30
                                Seriously now, are there any benefits to using ZFS or btrfs over ext3, or even ext4 or XFS, as a desktop filesystem?

                                I highly doubt I will ever use ZFS unless the distro's installer offers it during the partitioning process.

                                Comment
