Btrfs Will Finally "Strongly Discourage" You When Creating RAID5 / RAID6 Arrays

  • #51
    Originally posted by mppix View Post

    Benefits like?
    I have 4 SSDs set up with mdadm RAID10 and after hours of tuning and testing, I am still not impressed with IOPS and 99% latency is significantly higher than single disk.
    BTRFS is right in generally recommending raid 1 (or raid 1c2, c3, ..)
    With that setup you could be hitting the limitations of the file system and/or the RAID controller. Anecdotally, I've read that F2FS greatly outperforms Ext4 with regard to SSD mirrored stripes.
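
    For anyone wanting to reproduce the IOPS/latency comparison above, a minimal fio sketch, assuming fio is installed and the device paths are placeholders for your own single disk or md array (random reads only, so it is non-destructive, but still best done on a test box):

    ```shell
    # Hedged sketch: measure 4k random-read IOPS and latency percentiles on /dev/md0.
    # /dev/md0 is a placeholder; substitute your array or a single member disk to
    # compare. --direct=1 bypasses the page cache so you measure the device stack.
    fio --name=randread --filename=/dev/md0 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
        --direct=1 --ioengine=libaio --runtime=60 --time_based \
        --group_reporting --percentile_list=50:99:99.9
    ```

    Running the same job against one member disk and against the array makes the "99% latency is higher than single disk" claim directly checkable.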



    • #52
      Originally posted by horizonbrave View Post

      ZFS has a built-in email alert system. Doesn't BTRFS have one, or do you know of any plans to implement one?
      Thanks
      I don't think that's planned. Some systemd-journal magic or log parsing would be necessary (hence why I refuse to do it). ZED is very handy indeed.

      EDIT: Typo.
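
      Since nothing is built in, here is a rough sketch of the DIY log-parsing approach mentioned above: a script, runnable from cron or a systemd timer, that mails when `btrfs device stats` reports nonzero error counters. The mount point and address are placeholders, and it assumes a working `mail` command (e.g. from mailutils).

      ```shell
      #!/bin/sh
      # Hedged sketch of a DIY btrfs alert, not a built-in feature.
      # /mnt/pool and admin@example.com are placeholders.
      MOUNT=/mnt/pool
      # btrfs device stats prints lines like "[/dev/sda].write_io_errs  0";
      # keep only lines whose counter (field 2) is nonzero.
      ERRORS=$(btrfs device stats "$MOUNT" | awk '$2 != 0')
      if [ -n "$ERRORS" ]; then
          printf '%s\n' "$ERRORS" | mail -s "btrfs errors on $MOUNT" admin@example.com
      fi
      ```

      This is nowhere near ZED's coverage (it only catches accumulated device error counters, not events as they happen), which is rather the point being made above.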



      • #53
        Originally posted by drjohnnyfever View Post
        So it would be relatively pointless to post the obligatory "ZFS RaidZ1/2/3 work really well". But I will say that I've been extremely happy with it. And they came out with RaidZ3 specifically because of the risk of disk failures during rebuilds with very large disks.

        No matter what FS/volume management you are using, for peak performance, stripes and/or stripes-of-mirrors are the way to go.
        https://youtu.be/xWjOh0Ph8uM?t=458



        • #54
          Originally posted by mppix View Post
          What is your point? He found some workload a year ago that had issues? He just posted a video last month about building an all flash NAS using RAIDZ2 with TrueNAS. I guess ZFS is just totally useless.



          • #55
            Originally posted by drjohnnyfever View Post
            What is your point? He found some workload a year ago that had issues? He just posted a video last month about building an all flash NAS using RAIDZ2 with TrueNAS. I guess ZFS is just totally useless.
            Point?
            (1) NVMe and RAID is a nontrivial topic
            (2) ZFS is software written by humans
            (3) Why is there always someone posting about ZFS greatness in BTRFS posts? Go to your own threads!
            (4) TrueNAS is moving to Debian. Let us see how long it takes them to offer a BTRFS alternative...



            • #56
              Originally posted by skeevy420 View Post
              With that setup you could be hitting the limitations of the file system and/or the RAID controller. Anecdotally, I've read that F2FS greatly outperforms Ext4 with regard to SSD mirrored stripes.
              Interesting. I'm using mdadm and I'm getting the impression that it is more of a platform bottleneck (2-channel DDR4 with an 8-core processor). XFS and ext4 behave similarly, but I did not try F2FS.



              • #57
                Originally posted by mppix View Post

                Point?
                (1) NVMe and RAID is a nontrivial topic
                (2) ZFS is software written by humans
                (3) Why is there always someone posting about ZFS greatness in BTRFS posts? Go to your own threads!
                (4) TrueNAS is moving to Debian. Let us see how long it takes them to offer a BTRFS alternative...
                1) Okay... I didn't say it wasn't.
                2) Yes, I didn't say it wasn't.
                3) It's perfectly relevant when we're talking about the pitfalls of RAID5/6. These issues have been around a long time, and people have done a bunch of work on resolving them.
                4) They are still doing a ton of new feature development on FreeBSD, so I'm not sure that is accurate. What we know for sure is that they came out with a product for running Linux containers. And they seem pretty committed to ZFS, considering they are paying devs to upstream work to OpenZFS.



                • #58
                  Originally posted by mppix View Post
                  Interesting. I'm using mdadm and I'm getting the impression that it is more of a platform bottleneck (2-channel DDR4 with an 8-core processor). XFS and ext4 behave similarly, but I did not try F2FS.
                  It's on Wikipedia so it has to be true

                  But with that kind of hardware and setup I imagine that finding the bottleneck is a pain in the ass.



                  • #59
                    Originally posted by drjohnnyfever View Post
                    So it would be relatively pointless to post the obligatory "ZFS RaidZ1/2/3 work really well". But I will say that I've been extremely happy with it. And they came out with RaidZ3 specifically because of the risk of disk failures during rebuilds with very large disks.

                    No matter what FS/volume management you are using, for peak performance, stripes and/or stripes-of-mirrors are the way to go.
                    ZFS as a whole does work really well, that is, if you are willing to constrain yourself to ZFS' design limitations (like the inability to restripe or change RAID levels, the inability to shrink individual volume members, loss of performance on nearly-full volumes, and limited interoperability with the Linux page cache, leading to suboptimal memory management and data duplication between the page cache and ARC).

                    I don't fit into these constraints, so ZFS doesn't really exist for me.
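
                    For contrast, the restripe/RAID-level change that ZFS lacks is a single online operation on btrfs. A hedged sketch, where the mount point is a placeholder:

                    ```shell
                    # Hedged sketch: convert an existing btrfs volume's data and metadata
                    # RAID profiles online. /mnt/pool is a placeholder mount point.
                    # Balance rewrites every chunk, so expect it to take a while.
                    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

                    # Progress can be checked from another shell:
                    btrfs balance status /mnt/pool
                    ```

                    The same mechanism handles adding/removing devices of mismatched sizes, which is the flexibility being argued for here.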
                    Last edited by intelfx; 08 March 2021, 12:55 PM.



                    • #60
                      Originally posted by intelfx View Post

                      ZFS as a whole does work really well — that is, if you are willing to constrain yourself with ZFS' design limitations (like inability to restripe, inability to reduce size of individual volume members, horrible performance with nearly-full volumes and limited interoperability with Linux page cache leading to suboptimal memory management and data duplication in RAM), not to mention the ongoing licensing issues.

                      Btrfs was explicitly designed not to have those limitations.
                      Every filesystem has design limitations; some of them are fixable. I'm not aware of any filesystem with no allocation-performance degradation on a nearly full volume, though presumably btrfs handles it significantly better than ZFS does. That is a known constraint of ZFS, not a fundamental flaw, as I see it: on any filesystem, if you hit 100% utilization you need to buy bigger disks; on ZFS, if you hit 80% utilization you should buy bigger disks. For the applications where ZFS is used, that is generally an acceptable tradeoff for the features you get in return.

                      As for the page-cache issues: yes, that is a known consideration. It's also a solvable one, solvable enough that Oracle effectively fixed it completely in their presumed-dead proprietary ZFS, and it's not a performance bottleneck in the vast majority of cases. Somewhat separately, yet related, ZFS on Linux has long had memory-management issues that the other platforms have not had to deal with, and there is ongoing work there.

                      I did not intend to blindly rag on btrfs just for the sake of it. I'm just pointing out that in the case of RAID5/6, the ZFS devs identified these problems a long time ago, recommended against RAID5 back in 2005, and produced an alternative that solves a lot of the issues there: the write hole, rebuild times for mostly empty pools, and disk failures during rebuilds/resilvers. Now (2021) ZFS has sequential resilver and dRAID, which continue to address issues in that area.
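
                      As a concrete illustration of the dRAID feature mentioned above, a hedged sketch for OpenZFS 2.1+. The pool name and device names are placeholders, and the spec string syntax (`draid[parity][:<data>d][:<children>c][:<spares>s]`) should be checked against your zpoolconcepts man page before use:

                      ```shell
                      # Hedged sketch: an 11-disk dRAID2 pool with 4-wide data groups and
                      # 1 distributed spare (OpenZFS 2.1+). "tank" and sda..sdk are placeholders.
                      zpool create tank draid2:4d:11c:1s \
                          sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk

                      # The distributed spare is what enables the fast sequential rebuild:
                      zpool status tank
                      ```

                      The distributed spare means rebuild writes are spread over all disks instead of hammering one replacement drive, which is the rebuild-time point being made.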

                      I'm sure there are tons of people using btrfs who are extremely happy with it. There are people, I'm sure, who used ZFS and moved to btrfs, and it makes their lives better. But there are also a lot of people using ZFS who moved off name-your-favorite-fs, and it solves their problems better.

                      As to the licensing thing: there are no 'ongoing licensing issues.' Linux is GPLv2 and ZFS is CDDL; that isn't an 'ongoing licensing issue.' There is nothing stopping distros from shipping root-on-ZFS by default if they want, and there is nothing stopping you from building or installing ZFS yourself, legally or otherwise. The ONLY consequence is that ZFS isn't going to be mainlined into the Linux kernel, and at this point nobody cares. The licensing issue is a non-issue.

                      So yes, Btrfs has a design that 'fixes' some aspects of ZFS, but it has its own issues and limitations as well. In some cases, like RAID5/6, btrfs chose, for some reason, to ignore the problems pointed out and addressed by ZFS more than a decade ago. It would have been spectacular if they had said 'we saw what they were doing with RAIDZ and decided to improve on those principles' (or solved it a different way), but they didn't. So don't go telling me that btrfs fixes all these issues with ZFS while pretending it is uniformly superior in every detail and that this isn't worth discussing.
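
                      For anyone running btrfs RAID5/6 despite the warnings this article is about, the commonly cited mitigations are keeping metadata on a non-parity redundant profile and scrubbing after any unclean shutdown. A hedged sketch, with mount point and devices as placeholders:

                      ```shell
                      # Hedged sketch: the commonly recommended btrfs parity-RAID layout.
                      # Data on raid6, metadata on raid1c3 so the write hole cannot corrupt
                      # the filesystem trees. Devices and /mnt/pool are placeholders.
                      mkfs.btrfs -d raid6 -m raid1c3 /dev/sda /dev/sdb /dev/sdc /dev/sdd
                      mount /dev/sda /mnt/pool

                      # After an unclean shutdown, scrub to detect and repair any
                      # parity/data mismatch from interrupted stripe writes (-B waits):
                      btrfs scrub start -B /mnt/pool
                      btrfs scrub status /mnt/pool
                      ```

                      This is a workaround, not a fix: it narrows the write-hole exposure to data stripes rather than eliminating it, which is why the mkfs warning discussed in the article exists.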

