Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15

  • #31
    Originally posted by mdedetrich View Post
    TBH, if you have this 6-8 bay setup, RAID 5 (or raidz1) with a 1/2 TB NVMe SSD set up as L2ARC, along with 32 GB of RAM, is going to give you much better performance, usable space and reliability compared to a RAID 10 setup.

    With such a small pool, you are not going to get much faster read speeds from a RAID 10 setup (if we are talking hard drives with random reads) compared to raidz1. You are just better off using an SSD cache (i.e. L2ARC), which is going to be fine for a typical home media setup.

    Regarding a home setup, TrueNAS Core has come a long way if you have commodity hardware; otherwise there is stuff like https://www.truenas.com/truenas-mini/
    True.
    Thumbs up also for TrueNAS. I'm following the development of their Debian-based SCALE version closely.
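
    A minimal sketch of the kind of raidz1 + L2ARC layout described in the quote above, assuming OpenZFS is installed; the pool name and all device paths are hypothetical placeholders:

    Code:
#!/usr/bin/env python3
"""Rough sketch: assemble the zpool command for a 6-disk raidz1 pool
with an NVMe SSD attached as L2ARC cache. Pool name and device paths
are placeholders -- adjust them for your own hardware. Creating the
pool requires the OpenZFS userland tools and root privileges."""
import shlex
import subprocess

POOL = "tank"                                # hypothetical pool name
HDDS = [f"/dev/sd{c}" for c in "abcdef"]     # six data disks (placeholder paths)
L2ARC_DEV = "/dev/nvme0n1"                   # NVMe SSD used as level-2 ARC

# One raidz1 vdev built from the six HDDs, plus the SSD as a cache device.
create_cmd = ["zpool", "create", POOL, "raidz1", *HDDS, "cache", L2ARC_DEV]

DRY_RUN = True  # flip to False to actually create the pool (destructive!)
if DRY_RUN:
    print(shlex.join(create_cmd))            # just show what would be run
else:
    subprocess.run(create_cmd, check=True)   # run zpool create for real

    A cache device can also be bolted onto an existing pool later with "zpool add <pool> cache <device>".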

    Originally posted by mdedetrich View Post
    In context, by desperate I meant that I don't know why they rushed RAID 5/6 into BTRFS so fast knowing that it was broken. You are dealing with a *filesystem*, which is one of the few things you *don't* want to fail, especially a filesystem that is designed for resiliency. If you are going to design a filesystem that by design does its best to keep your data safe, then you should only push it into the Linux tree when it has achieved that goal.

    tl;dr filesystems like this should never be rushed. If you are using something like ext2-4 then you can maybe expect some data loss in extreme circumstances, but btrfs was blatantly advertised as Linux's answer to ZFS; in other words, it's not meant to be released with the glaring issues it has historically had.
    I agree. While I have started to use BTRFS in recent years and have grown quite fond of it, its communication could definitely have been better in the early years, when folks lost data by using it in ways that had received limited validation. Some of that could have been avoided by not exposing such functionality...
    I still have hopes that it eventually becomes the de facto default and the replacement for mdadm, LVM, and ext4. However, that is not really on the horizon yet (encryption, etc.).



    • #32
      Originally posted by mppix View Post

      I agree. While I have started to use BTRFS in recent years and have grown quite fond of it, its communication could definitely have been better in the early years, when folks lost data by using it in ways that had received limited validation. Some of that could have been avoided by not exposing such functionality...
      I still have hopes that it eventually becomes the de facto default and the replacement for mdadm, LVM, and ext4. However, that is not really on the horizon yet (encryption, etc.).
      Yeah, the situation is pretty sad, because if it wasn't for the licensing issues, ZFS would have been the perfect candidate for a future filesystem on Linux (I mean you can currently use it, but you have to load it as an out-of-tree kernel module). If it wasn't for that war between Sun and Linux in the early days, history might have been different, because for the time and what they created, Sun's engineers were freaking brilliant, and ZFS is a technical masterpiece that has stood the test of time. (The only real issue ZFS has now is that it's not very flexible when it comes to expanding existing pools; thankfully they are slowly working on that.)



      • #33
        Originally posted by mppix View Post

        SSDs are not 'that' predictable in failure. If they were, SMART would tell you the date and time when they will fail.
        I use RAID 1 or 10 with SSDs because it provides uptime if something goes wrong with a drive, but I replace them below a certain "health" status or TBW.
        RAID 5/6 is problematic because you put a huge load on the degraded array after a disk failure. It has been shown that this can act as a catalyst for a second drive failure. This is true for SSDs and large HDDs.
        How do you check SSD health status, and how reliable is it? I'm asking because in my personal experience I've had multiple hard disks "get wonky, throw bad blocks, get slow, get loud, look bad on SMART", which was indicator enough to check and replace them in time and recover something like 99.9% of the data off them. I've had two SSDs fail without any prior warning, and they instantly lost all data, permanently. In my experience, SSDs are simply not as reliable as hard disks yet.
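
        On the "how do you check SSD health" question above: one common approach is smartmontools. A minimal sketch, assuming smartctl 7.0+ (for JSON output) is installed, the script runs with enough privileges, and an NVMe drive sits at the hypothetical path /dev/nvme0; the exact fields reported vary by drive:

        Code:
#!/usr/bin/env python3
"""Rough sketch: read SSD wear indicators from smartctl's JSON output.
Assumes smartmontools >= 7.0; /dev/nvme0 is a placeholder device path.
Field names below match typical NVMe output and may differ elsewhere."""
import json
import subprocess

DEVICE = "/dev/nvme0"  # placeholder -- point this at your own SSD

result = subprocess.run(
    ["smartctl", "--json", "--all", DEVICE],
    capture_output=True, text=True,
    check=False,  # smartctl sets nonzero exit bits even on healthy drives
)
data = json.loads(result.stdout)

# NVMe drives expose a health log; 'percentage_used' is the drive's own
# estimate of consumed endurance (100 means the rated endurance is used up).
health = data.get("nvme_smart_health_information_log", {})
pct_used = health.get("percentage_used")
units_written = health.get("data_units_written")  # 1 unit = 1000 * 512 bytes per the NVMe spec

if pct_used is not None:
    print(f"Endurance used: {pct_used}%")
if units_written is not None:
    print(f"Host data written: ~{units_written * 512_000 / 1e12:.1f} TB")

        For SATA SSDs the wear data lives in vendor-specific ATA attributes instead (e.g. Wear_Leveling_Count or Media_Wearout_Indicator), so the field names above would need adjusting. And as noted in the quote, none of this predicts the sudden controller failures that take out a drive without warning.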

