Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15

  • #31
    Originally posted by mppix View Post
    I would argue that this thread is not about ZFS.
    However, for context: ZFS is old and was designed in the HDD era when raid 5/6 was more relevant. Today, if you load up an AMD EPYC with 24 nvme drives, ZFS itself is the bottleneck.
    ZFS is also _by_far_ not the only enterprise storage solution, especially for bulk storage that goes beyond one server, when we start talking about scale-out network filesystems.
    Not sure about the bottleneck bit, but you're right that it was designed in the HDD era.

    Originally posted by mppix View Post
    I believe the most common home-NAS are 2 or 4 bay, where raid 5/6 does not really make sense.
    For 6 and 8 bay home-NAS, it is a bit of a different story. You may prefer capacity. However, my take would be that with the disk sizes in 2021, you may prefer raid 10, especially for SATA HDD. Then you at least have a chance of saturating a 1 GbE line (considering also the mediocre computing power of today's home-NAS).
    Also, I don't know if ZFS is that common for home-NAS with Synology (primarily) using BTRFS and qnap ext4.
    TBH if you have this 6-8 bay setup, RAID 5 (or raidz1) with a 1-2 TB NVMe SSD set up as L2ARC along with 32 GB of RAM is going to give you much better performance, usable capacity, and reliability compared to a RAID 10 setup.

    With such a small pool, you are not going to get much faster read speeds from a RAID 10 setup (if we are talking hard drives with random reads) compared to raidz1; you are better off adding an SSD cache (i.e. L2ARC), which is going to be fine for a typical home media setup.
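
    To put rough numbers on that capacity trade-off, here is a quick sketch (the 6-bay count and 8 TB drive size are hypothetical, and real-world overhead like metadata and slop space is ignored):

    ```python
    # Usable capacity of RAID 10 vs. raidz1 for a small pool of equal-size drives.

    def raid10_usable(drives: int, size_tb: float) -> float:
        # RAID 10 mirrors every drive: half the raw capacity is usable.
        return drives * size_tb / 2

    def raidz1_usable(drives: int, size_tb: float) -> float:
        # raidz1 spends one drive's worth of space on parity.
        return (drives - 1) * size_tb

    # Hypothetical 6-bay NAS with 8 TB drives:
    print(raid10_usable(6, 8))  # -> 24.0 TB usable
    print(raidz1_usable(6, 8))  # -> 40.0 TB usable
    ```

    So on a 6-bay box, raidz1 gives you two-thirds more usable space than RAID 10 for the same disks, which is why the SSD-cache-plus-parity layout is attractive at this size.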

    Regarding home setups, TrueNAS Core has come a long way if you have commodity hardware; otherwise there is hardware like https://www.truenas.com/truenas-mini/
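
    For reference, attaching that L2ARC device is a one-liner in OpenZFS. A minimal sketch (the pool name "tank" and the device path are placeholders for your own setup):

    ```shell
    # Add an NVMe SSD as an L2ARC read cache to an existing pool.
    zpool add tank cache /dev/nvme0n1

    # Verify the cache device now appears under the pool:
    zpool status tank
    ```

    The cache vdev can be removed again with `zpool remove` without affecting pool data, since L2ARC only holds copies of cached blocks.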

    Originally posted by mppix View Post
    ZFS started as a product by a large company.
    BTRFS is an open-source project with community contributions. Crowd-sourcing a project implies less "direction", and development is done publicly for everyone to see.
    I don't think Linux "desperately needs" either, because you can get largely the same functionality with an "mdadm+LVM+ext4/xfs" stack that tends to outperform both of them.
    In context, by "desperate" I meant I don't know why they rushed RAID 5/6 into BTRFS so fast knowing that it was broken. You are dealing with a *filesystem*, which is one of the few things you *don't* want to fail, especially a filesystem that is designed for resiliency. If you are going to design a filesystem that by design does its best to make sure your data is safe, then you should only push it into the Linux tree when it has achieved that goal.

    tl;dr: filesystems like this should never be rushed. If you are using something like ext2-4, then you can maybe expect some data loss in extreme circumstances, but BTRFS was blatantly advertised as Linux's answer to ZFS; in other words, it was not meant to be released with the glaring issues it has historically had.
    Last edited by mdedetrich; 03 September 2021, 06:09 AM.



    • #32
      Originally posted by mdedetrich View Post
      TBH if you have this 6-8 bay setup, RAID 5 (or raidz1) with a 1-2 TB NVMe SSD set up as L2ARC along with 32 GB of RAM is going to give you much better performance, usable capacity, and reliability compared to a RAID 10 setup.

      With such a small pool, you are not going to get much faster read speeds from a RAID 10 setup (if we are talking hard drives with random reads) compared to raidz1; you are better off adding an SSD cache (i.e. L2ARC), which is going to be fine for a typical home media setup.

      Regarding home setups, TrueNAS Core has come a long way if you have commodity hardware; otherwise there is hardware like https://www.truenas.com/truenas-mini/
      True.
      Thumbs up also for TrueNAS. I'm following the development of their Debian-based SCALE version closely.

      Originally posted by mdedetrich View Post
      In context, by "desperate" I meant I don't know why they rushed RAID 5/6 into BTRFS so fast knowing that it was broken. You are dealing with a *filesystem*, which is one of the few things you *don't* want to fail, especially a filesystem that is designed for resiliency. If you are going to design a filesystem that by design does its best to make sure your data is safe, then you should only push it into the Linux tree when it has achieved that goal.

      tl;dr: filesystems like this should never be rushed. If you are using something like ext2-4, then you can maybe expect some data loss in extreme circumstances, but BTRFS was blatantly advertised as Linux's answer to ZFS; in other words, it was not meant to be released with the glaring issues it has historically had.
      I agree. While I have started to use BTRFS in recent years and have grown quite fond of it, its communication could definitely have been better in the early years, when folks lost data by using it in ways that had received limited validation. Some of that could have been avoided by not exposing such functionality...
      I still have hope that it eventually becomes the de-facto default and the replacement for mdadm, LVM, and ext4. However, this is not really on the horizon yet (encryption, etc.).
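
      Part of the appeal is that the multi-device functionality those three layers provide is built into Btrfs itself. A minimal sketch of a two-disk mirror (device paths are placeholders):

      ```shell
      # Create a two-device Btrfs filesystem with RAID1 for both
      # data (-d) and metadata (-m), replacing what would otherwise
      # be an mdadm mirror under ext4.
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

      # Mounting either device brings up the whole filesystem:
      mount /dev/sdb /mnt
      btrfs filesystem usage /mnt
      ```

      Devices can later be added or removed online with `btrfs device add` / `btrfs device remove`, which is where it overlaps with what LVM provides today.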



      • #33
        Originally posted by mppix View Post

        I agree. While I have started to use BTRFS in recent years and have grown quite fond of it, its communication could definitely have been better in the early years, when folks lost data by using it in ways that had received limited validation. Some of that could have been avoided by not exposing such functionality...
        I still have hope that it eventually becomes the de-facto default and the replacement for mdadm, LVM, and ext4. However, this is not really on the horizon yet (encryption, etc.).
        Yeah, the situation is pretty sad, because if it weren't for the licensing issues, ZFS would have been the perfect candidate as a future filesystem on Linux (you can use it currently, but you have to load it as a kernel module). If it weren't for the war between Sun and Linux in the early days, history might have been different, because for the time and what they created, Sun's engineers were freaking brilliant, and ZFS is a technical masterpiece that has stood the test of time (the only real issue ZFS has now is that it's not very flexible when it comes to expanding existing pools; they are working on this, slowly but thankfully).



        • #34
          Originally posted by mppix View Post

          SSDs are not 'that' predictable in failure. If they were, SMART would tell you the date and time when they will fail.
          I use RAID 1 or 10 with SSDs because it provides uptime if something goes wrong with a drive, but I replace them below a certain "health" status or TBW.
          RAID 5/6 is problematic because you put a huge load on the degraded array after a disk failure. It has been shown that this can act as a catalyst for a second drive failure. This is true for SSDs and large HDDs.
          How do you check for SSD health status, and is it at all reliable? I'm asking because in my personal experience I've had multiple hard disks get wonky, throw bad blocks, get slow, get loud, or look bad on SMART, which was indicator enough to check and replace them in time and get like 99.9% of the data off them. I've had 2 SSDs fail without any prior hitch, and they instantly lost all data, permanently. In my experience, SSDs simply are not as reliable as hard disks yet.
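
          For what it's worth, the usual way to read those wear indicators is smartmontools; a sketch (device paths are placeholders, and which attributes exist depends on the vendor):

          ```shell
          # Overall PASSED/FAILED verdict from the drive's own self-assessment:
          smartctl -H /dev/sda

          # SATA SSD attribute table; wear-related entries to watch include
          # Wear_Leveling_Count, Media_Wearout_Indicator, Total_LBAs_Written:
          smartctl -A /dev/sda

          # NVMe drives report a standardized health log instead, with
          # "Percentage Used" and "Available Spare" fields:
          smartctl -a /dev/nvme0
          ```

          As the quoted post suggests, these are better treated as replace-early thresholds (TBW, percentage used) than as failure predictions; sudden controller deaths give no SMART warning at all.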

