Oracle Talks Up Btrfs Rather Than ZFS For Their Unbreakable Enterprise Kernel 6

  • #21
    Originally posted by Spam View Post
    Does ZFS support shrinking a pool or changing raid profiles?

    I have a btrfs setup with 3 spanned disks (with metadata raid1). One disk is starting to fail and I can't replace it today, so I'm telling btrfs to remove that disk and shrink the fs. All done live with no downtime.
    Oracle's iteration of ZFS in Solaris 11.4 (from March 2018) allows shrinking pools/removing drives.
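For reference, the live shrink Spam describes maps to a btrfs device removal. A minimal sketch, with hypothetical device and mount point names:

```shell
# Remove the failing disk from the filesystem; btrfs relocates its
# data onto the remaining devices while the fs stays mounted.
btrfs device remove /dev/sdc /mnt/data

# Check space and device layout afterwards.
btrfs filesystem usage /mnt/data
```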



    • #22
      Originally posted by aht0 View Post

      Oracle's iteration of ZFS in Solaris 11.4 (from March 2018) allows shrinking pools/removing drives.
      How?

      Quote, please.



      • #23
        Originally posted by Raka555 View Post
        I was a big fan of ZFS when I discovered it around 2007. I used it just about everywhere and thought it was the best thing ever.

        Then as those filesystems got used more, they all "fell off the cliff" performance-wise when they reached about 75%-80% space utilization.

        I was called into emergency meetings and stuff where I had to explain to the bosses why they can't use more than 75% disk space of their very expensive SAN/SSD storage.

        I lost my appetite for ZFS as a result.

        I am curious:
        Has that problem been fixed?
        Does nobody else run into this problem?
        Are all ZFS deployments just toys that don't do real I/O?
        Bad design/admin/planning on your part. That behaviour/performance penalty can be expected with CoW file systems (incl. Btrfs).
        If you want to get around it, plan for 20% excess free space that's not going to be utilized: make a file system or volume that is not used to store any data but which has a size reservation of about 20% of pool capacity. End of story.
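The reservation trick described here can be sketched as follows (the pool name `tank` and the 200G figure are assumptions; size the reservation to roughly 20% of your pool):

```shell
# Create an empty, unmounted dataset whose only job is to hold back
# headroom so the CoW allocator never runs near-full.
zfs create -o refreservation=200G -o mountpoint=none tank/headroom

# In an emergency, the headroom can be reclaimed instantly:
zfs set refreservation=none tank/headroom
```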



        • #24
          Originally posted by aht0 View Post
          Bad design/admin/planning on your part. That behaviour/performance penalty can be expected with CoW file systems (incl. Btrfs).
          If you want to get around it, plan for 20% excess free space that's not going to be utilized: make a file system or volume that is not used to store any data but which has a size reservation of about 20% of pool capacity. End of story.
          Hilarious. Our system is wildly inefficient = users are stupid.



          • #25
            Originally posted by aht0 View Post

            Oracle's iteration of ZFS in Solaris 11.4 (from March 2018) allows shrinking pools/removing drives.
            I read that. How is it in Linux?



            • #26
              Originally posted by siyia View Post
              I've been using btrfs for 5+ years non-commercially, together with snapshots and compression (single-disk setups), and I seriously don't understand the hate around it; maybe it fails mostly in RAID setups? Before trying btrfs, I had a zpool for my root & home using OpenZFS on Arch Linux, and the filesystem was using a ton of memory; day-to-day fs speed wasn't significantly faster with OpenZFS either.
              Same here. I've had many more cases of data loss with ext4 than btrfs. Actually I never had big issues with btrfs at all, while ext4 has failed me badly 5 or 6 times now (mostly on external drives; I now use btrfs on those as well).
              The only real issue I have with it is that it tends to cause much more fragmentation than other filesystems. But that's a general problem of CoW filesystems, and with the ongoing shift to SSDs it won't matter much anymore.

              I think most of the hate is coming from the missing (and messed up) RAID5/6 support. And that was indeed a pretty dark story.
              Thankfully, somebody finally seems to have stepped up to fix that, and there is progress. Slow, but at least there is progress.

              The "standard" feature set of btrfs works nicely though.



              • #27
                Originally posted by Raka555 View Post
                I was a big fan of ZFS when I discovered it around 2007. I used it just about everywhere and thought it was the best thing ever.

                Then as those filesystems got used more, they all "fell off the cliff" performance-wise when they reached about 75%-80% space utilization.

                I was called into emergency meetings and stuff where I had to explain to the bosses why they can't use more than 75% disk space of their very expensive SAN/SSD storage.

                I lost my appetite for ZFS as a result.

                I am curious:
                Has that problem been fixed?
                Does nobody else run into this problem?
                Are all ZFS deployments just toys that don't do real I/O?


                It is (or was) a well-known issue; you can google for ZFS performance degradation at 80% utilization.
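Staying under that threshold is straightforward to monitor; `zpool list` reports allocation as a percentage:

```shell
# CAP shows allocated space as a percentage of pool size;
# alerting before it crosses ~80% avoids the CoW performance cliff.
zpool list -o name,size,alloc,free,cap
```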



                • #28
                  Originally posted by andyprough View Post
                  Hilarious. Our system is wildly inefficient = users are stupid.
                  No feature ever comes for free. It just happens that the cost of CoW is that you cannot fill an entire disk and keep the performance. CoW gives you benefits in return, though, so aht0 is correct: Raka555 used ZFS wrongly if they didn't check its specs before deploying it.



                  • #29
                    Originally posted by aht0 View Post
                    Bad design/admin/planning on your part. That behaviour/performance penalty can be expected with CoW file systems (incl. Btrfs).
                    If you want to get around it, plan for 20% excess free space that's not going to be utilized: make a file system or volume that is not used to store any data but which has a size reservation of about 20% of pool capacity. End of story.
                    Hindsight is always 20/20.

                    At thousands of dollars per TB of SAN space, having to buy 25-30% more is a hard sell to management.



                    • #30
                      Originally posted by Spam View Post
                      Does ZFS support shrinking a pool or changing raid profiles?

                      I have a btrfs setup with 3 spanned disks (with metadata raid1). One disk is starting to fail and I can't replace it today, so I'm telling btrfs to remove that disk and shrink the fs. All done live with no downtime.
                      Yes, depending on available storage. Basically, as long as there is enough space on the remaining disks for the data, you can remove a disk from your setup. It's one of the reasons I stick to using mirrors -- I'm guaranteed to have the space available. If one disk starts to go bad, it's as simple as "zpool detach mirror_name /dev/disk/by-id/disk-id".
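Assuming a mirror vdev in a pool named `tank` (pool and disk IDs here are hypothetical), the detach-and-replace flow sketched above would look like:

```shell
# Drop the failing half of the mirror; the pool stays online,
# running unprotected on the surviving disk.
zpool detach tank /dev/disk/by-id/ata-FAILING_DISK

# Later, attach a replacement to the surviving disk to re-form
# the mirror; ZFS resilvers it in the background.
zpool attach tank /dev/disk/by-id/ata-SURVIVING_DISK /dev/disk/by-id/ata-NEW_DISK
```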

