ZFS On Linux Is Now Set For "Wide Scale Deployment"


  • #21
    Btrfs on Fedora 18 is not yet production quality.

    Originally posted by timemaster
    I have been using ZFS for quite a while now.
    While ZFS and Btrfs have a similar design, there is a big difference between them:
    ZFS was released as a stable filesystem many years ago by Sun Microsystems, and is now released as stable on Linux,
    while Btrfs is still not stable after many years and is still considered experimental/in development.
    If you want a filesystem with B-tree functionality in production, your only choice is ZFS.
    One biased opinion to read:
    http://rudd-o.com/linux-and-free-sof...ter-than-btrfs
    One day I started my system to find a message: disk full, cannot write to it. I was using btrfs as installed by Fedora 18's Anaconda.
    I tried cleaning up by visiting /tmp and removing a few files (YUM installer leftovers, as in rm -fr yum2013*).
    df showed that there was lots of empty space, so I figured I would delete some files and perhaps that would help.

    The next thing I saw was a dozen yum files in /home, and my /boot was empty.
    I have a multiboot setup, so I restarted with Mint 14.
    Mint could see the files, but when it tried deleting them, it also got the message "cannot write, unable to delete".
    Fortunately, Mint could read what was left on the disk. I backed up again (a second backup) and essentially reinstalled using LVM and ext4.

    I will try btrfs again as the main filesystem for Fedora 19, but only on a separate test system, on a drive reserved for testing.



    • #22
      Originally posted by lsatenstein View Post
      One day I started my system to find a message: disk full, cannot write to it. I was using btrfs as installed by Fedora 18's Anaconda.
      I tried cleaning up by visiting /tmp and removing a few files (YUM installer leftovers, as in rm -fr yum2013*).
      df showed that there was lots of empty space, so I figured I would delete some files and perhaps that would help.

      The next thing I saw was a dozen yum files in /home, and my /boot was empty.
      I have a multiboot setup, so I restarted with Mint 14.
      Mint could see the files, but when it tried deleting them, it also got the message "cannot write, unable to delete".
      Fortunately, Mint could read what was left on the disk. I backed up again (a second backup) and essentially reinstalled using LVM and ext4.

      I will try btrfs again as the main filesystem for Fedora 19, but only on a separate test system, on a drive reserved for testing.
      The behavior that you observed in btrfs is by design. See the following:

      Fixing this requires making btrfs2. If you decide to use btrfs before btrfs2 is made, you could do a manual rebalance as a workaround whenever this happens. That will make things work until it happens again. Alternatively, you could use ZFS, which does not suffer from this problem.



      • #23
        I have never encountered such a situation. And even if it really is due to Btrfs fragmentation, Btrfs includes a mount option, "autodefrag", that does what it says on the tin. They are still trying to make sure that it works correctly with huge files before making it the default, but it doesn't hurt to enable it right now as well.
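
        Enabling it to see for yourself is a one-liner. A minimal sketch, assuming a Btrfs /home on a hypothetical /dev/sda2:

          # enable autodefrag on an already-mounted Btrfs filesystem
          mount -o remount,autodefrag /home
          # or persistently, via an /etc/fstab entry:
          /dev/sda2  /home  btrfs  defaults,autodefrag  0 0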



        • #24
          Originally posted by GreatEmerald View Post
          I have never encountered such a situation. And even if it really is due to Btrfs fragmentation, Btrfs includes a mount option, "autodefrag", that does what it says on the tin. They are still trying to make sure that it works correctly with huge files before making it the default, but it doesn't hurt to enable it right now as well.
          I am told that defragmentation and rebalancing mean two separate things in btrfs. That makes sense, because storing files linearly on disk does not imply that the blocks are part of a balanced tree. If defragmentation and rebalancing are two separate things, as I am told, the autodefrag mount option will not help. Additionally, developers in #btrfs on freenode informed me a few months ago that there is no automated rebalancing code in btrfs. Unless that has changed, there is no chance of the kernel proactively rebalancing to try to avoid this issue. You would need to run a rebalance from a cron job. However, you might need to schedule downtime whenever the cron job runs, because it can have a crippling effect on IO. That is especially true when btrfs is mounted with discard on SATA drives that predate SATA 3.1, because ATA TRIM is an unqueued command and btrfs does not really issue discards until a rebalance is done. One of the challenges associated with automated rebalancing is minimizing its performance penalty.
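
          For what it is worth, here is a sketch of what such a cron job could look like; the schedule and mountpoint are made up, and newer btrfs-progs also accept balance filters that limit how much data gets rewritten:

            # /etc/cron.d/btrfs-rebalance (hypothetical): rebalance every Sunday at 03:00
            0 3 * * 0  root  /sbin/btrfs balance start /
            # with a balance filter, only chunks less than half full are rewritten:
            # 0 3 * * 0  root  /sbin/btrfs balance start -dusage=50 /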

          ZFS has a feature called a zvol, a virtual block device whose reported space usage grows and shrinks as data is written and discarded. It offers an excellent opportunity to examine the discard behavior of various filesystems: format the zvol with them, mount with discard, write numerous files, and then remove them. I did this with a few filesystems, including btrfs. In the case of btrfs, I found that reported space utilization barely decreased following deletion of Gentoo's portage tree from what it was before I unpacked it. With that in mind, I would not be surprised if further analysis revealed the ENOSPC errors to be related to the discard behavior. In particular, if you format a ZFS zvol with btrfs and mount btrfs with discard, you could discover that ENOSPC in btrfs occurs whenever ZFS indicates that the zvol is full.
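
          Roughly, that experiment looks like the following; the pool name, zvol name and mountpoint are all made up:

            # create a sparse 10G zvol on an existing pool named "tank"
            zfs create -s -V 10G tank/testvol
            mkfs.btrfs /dev/zvol/tank/testvol
            mount -o discard /dev/zvol/tank/testvol /mnt/test
            zfs list tank/testvol    # note USED before writing
            cp -a /usr/portage /mnt/test && rm -r /mnt/test/portage
            zfs list tank/testvol    # USED should shrink if the discards get through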

          With that said, this is somewhat off-topic. This thread is about whether or not people plan to try ZFS, not about how to fix btrfs.



          • #25
            Originally posted by ryao View Post
            In the case of btrfs, I found that reported space utilization barely decreased following deletion of Gentoo's portage tree from what it was before I unpacked it.
            To be clear, I meant before I ran `rm -r`.

            Edit: I am told by developers in #btrfs on freenode that Edward's observations were caused by two separate bugs that have since been fixed. The last remaining issue in the btrfs code that can cause ENOSPC problems involves space being allocated to metadata without any way to automatically reclaim it once freed. The need to do manual rebalancing to resolve ENOSPC issues should disappear after that code is written.
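
            Until that happens, you can watch the metadata situation yourself: btrfs reports how much space is allocated to each chunk type versus how much is actually used (the mountpoint here is hypothetical):

              # show allocated vs. used space for data and metadata chunks
              btrfs filesystem df /
              # prints lines such as "Data: total=..., used=..." and
              # "Metadata: total=..., used=..."; a large gap between total and
              # used is space that a manual rebalance can reclaim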

            With that said, ZFS does not suffer from this problem and is ready for deployment now.
            Last edited by ryao; 30 March 2013, 06:44 PM.



            • #26
              Originally posted by ryao View Post
              The behavior that you observed in btrfs is by design. See the following:

              Fixing this requires making btrfs2. If you decide to use btrfs before btrfs2 is made, you could do a manual rebalance as a workaround whenever this happens. That will make things work until it happens again. Alternatively, you could use ZFS, which does not suffer from this problem.

              While you're still watching this thread:

              I'm seriously considering switching to ZFS for my laptop's /home partition.

              The only doubt I currently have is: what effect will ZFS have on battery life and CPU consumption (probably comparable to Btrfs with gzip compression, or slightly higher)?

              I'm planning on using gzip compression at its default compression level, due to constrained space (I'm only using a 1 TB HDD, so there's no option of buying a bigger one).

              Many thanks in advance for your answer.



              • #27
                Originally posted by kernelOfTruth View Post
                While you're still watching this thread:

                I'm seriously considering switching to ZFS for my laptop's /home partition.

                The only doubt I currently have is: what effect will ZFS have on battery life and CPU consumption (probably comparable to Btrfs with gzip compression, or slightly higher)?

                I'm planning on using gzip compression at its default compression level, due to constrained space (I'm only using a 1 TB HDD, so there's no option of buying a bigger one).

                Many thanks in advance for your answer.
                CPU utilization will likely increase by a few percentage points. However, I cannot speculate on what the effect will be on battery life. ARC could improve battery life while the periodic transaction commit (every 5 seconds) could harm battery life. If I were you, I would put everything on ZFS, rather than just /home.
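
                If the 5-second commit interval turns out to hurt battery life, ZFSOnLinux exposes it as the zfs_txg_timeout module parameter, so it can be lengthened at the cost of losing a few more seconds of writes in a crash. The value below is only an example:

                  # raise the transaction commit interval from 5 to 15 seconds
                  echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout
                  # or persistently, in /etc/modprobe.d/zfs.conf:
                  # options zfs zfs_txg_timeout=15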

                With that said, you should consider LZ4 compression (a sketch of enabling it follows the list below). It has a few properties that make it more appealing. One is that LZ4's throughput is an order of magnitude greater than gzip's, which saves CPU time. Even more appealing is that LZ4 is extremely quick at identifying incompressible data, which saves even more CPU time. As far as mobile use is concerned, these properties should translate into power savings compared to gzip. If you are interested in reading about LZ4, the following links are rather informative:

                https://github.com/lz4/lz4 (the LZ4 project page)

                There are a few things to keep in mind when looking at those links:
                1. The first link involves LZ4 r11. The version that ZFS imported is r67 (+/- 2, I forget the exact revision that was imported) and LZ4 has seen plenty of improvements since r11. Of particular interest is the time spent detecting incompressible data.
                2. The second link compares LZO with LZ4. LZO is considered to be quick, but LZ4 beats it in every metric.
                3. The third link is Illumos' writeup on LZ4, which compares it to ZFS' lzjb. lzjb was invented to obtain fair compression, have high throughput and detect incompressible data quickly; those metrics are considered desirable for use in filesystems. LZ4 does so much better than lzjb in all of them that the Illumos developers initially thought it was too good to be true when it was first discussed on their mailing list.
                4. The fourth link is the LZ4 project page, which has a chart comparing the throughput and compression ratio of LZ4 to other compression algorithms. Of particular interest is that it shows LZ4 with an average compression ratio of 2.1 while gzip -1 averages 2.7, so you do not really pay much in terms of space for the benefits of LZ4.
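
                If you do pick LZ4 over gzip, enabling it is a single property change; the pool and dataset names below are made up, and your ZFS version needs LZ4 support:

                  # enable LZ4 compression on the dataset backing /home
                  zfs set compression=lz4 tank/home
                  # later, check how well it is doing
                  zfs get compression,compressratio tank/home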



                • #28
                  Originally posted by ryao View Post
                  The behavior that you observed in btrfs is by design. See the following:

                  Fixing this requires making btrfs2. If you decide to use btrfs before btrfs2 is made, you could do a manual rebalance as a workaround whenever this happens. That will make things work until it happens again. Alternatively, you could use ZFS, which does not suffer from this problem.
                  While his concerns may be valid, one cannot make the claims you've made here based on his claims alone, given the following link: http://lwn.net/Articles/393985/



                  • #29
                    Seriously now, are there any benefits to using ZFS or Btrfs over ext3, or even ext4 or XFS, as a desktop filesystem?

                    I highly doubt I will ever use ZFS unless the distro's installer offers it during the partitioning process.



                    • #30
                      Originally posted by Sonadow View Post
                      Seriously now, is there any benefits to using ZFS or btrfs over ext3 or even ext 4 or even XFS as a desktop filesystem?
                      No.

                      /10chars

