Btrfs Will Finally "Strongly Discourage" You When Creating RAID5 / RAID6 Arrays


  • #11
    Originally posted by mazumoto View Post
    This is unfortunate ... a few years ago I got the impression that most problems were fixed (besides the write hole). I remember them changing the status for raid56 on their wiki from "experimental" to "mostly safe" or similar.
    I have actually been running a raid5 array of 5 disks for a few years now - because I cannot afford 3 more disks for raid1. So I definitely hope they keep (or start) working on raid56 again.
    From one person who can't afford all the disks they'd like to another: you should look into migrating to ZFS. I've been rocking the same ZFS setup for many years on multiple different disks now.

    Code:
    skeevy420@CygnusX1 ~ $ lsblk -f
    NAME   FSTYPE     FSVER LABEL      UUID FSAVAIL FSUSE% MOUNTPOINT
    sda
    ├─sda1 vfat       FAT32                  251.8M     0% /boot/efi
    └─sda2 btrfs            GENTOO           439.3G     2% /var
    sdb
    ├─sdb1 zfs_member 5000  multimedia
    └─sdb9
    sdc
    ├─sdc1 zfs_member 5000  multimedia
    └─sdc9
    sdd
    ├─sdd1
    ├─sdd2 ntfs
    └─sdd3 ntfs
    SDD is a 2TB HDD and currently holds my Windows 10 install. 6 or 7 years ago I set it up as a ZFS data disk with 1.5TB for ZFS and 0.5TB for NTFS. 4 years ago I removed the NTFS partition and expanded the pool to the full 2TB. No problems at all. I quit dual booting for a few years and went Linux-only.
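
    Removing the NTFS partition and growing the pool is pretty painless. A rough sketch of the steps (assuming a single-disk pool named multimedia on sdd1, going by the labels in the lsblk output above; adjust the names for your setup):

    Code:
    # placeholder pool/partition names - check yours with 'zpool status' and 'lsblk'
    # after deleting the NTFS partition and growing sdd1 to cover the freed space:
    zpool set autoexpand=on multimedia      # let the pool pick up newly available capacity
    zpool online -e multimedia /dev/sdd1    # expand the vdev to the new partition size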

    The year before last I cloned SDD to the 4TB HDD SDB, and a little after that I added SDC to mirror it and add some much, much needed redundancy. SDD became Windows 10. Epic Games was giving out GTAV for free. Yeah. I know...
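
    For anyone wanting to do the same, turning a single-disk pool into a two-way mirror is basically a one-liner. Something like this (a sketch only; the pool name multimedia and the device names are assumptions based on the lsblk output above):

    Code:
    # use the device names exactly as 'zpool status' reports them
    zpool attach multimedia sdb sdc   # attach sdc as a mirror of the existing sdb vdev
    zpool status multimedia           # watch the resilver until it completes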

    When I built my current system I didn't realize I had broken an old SATA cable, but my ZFS mirror kept on chugging along even though one disk was giving iffy reporting and the pool was in a degraded state. When I realized it was the cable and replaced it, my mirror went back to a fully working state just like that.
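
    For the curious, the recovery after swapping the cable boils down to a few commands (sketch only; the pool name is the same assumption as above):

    Code:
    zpool status -x          # shows which pool/device is DEGRADED and the error counters
    zpool clear multimedia   # clear the logged errors once the cable is replaced
    zpool scrub multimedia   # verify both sides of the mirror are consistent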

    While I'm on a Gentoo (gentoox) setup now, I've used that same ZFS mirror on Fedora, Arch, Manjaro, Ubuntu, SUSE, and more.

    Y'all want to hear some shit: SDD is over 15 years old. It started its life as a 2TB external HDD that I used to take into the city to mirror the entire Debian repo with apt-mirror, because I lived around 200 yards from where the cable network ended. When I finally got broadband it became an internal disk. It has been with me through 4 different computers now.



    • #12
      Originally posted by King InuYasha View Post

      This is only true if you're willing to make a dedicated journal device. Otherwise it still has the write hole issue.
      Incorrect: mdraid has an option called consistency policy that allows for a partial parity log (PPL) stored in the member devices' metadata, so it does not require a dedicated journal device.
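
      For example, something along these lines (device names here are placeholders, not from a real setup; PPL needs a reasonably recent mdadm and kernel):

      Code:
      # create a RAID5 array using the partial parity log consistency policy
      mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            --consistency-policy=ppl /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
      # or switch an existing RAID5 array over - no dedicated journal device needed
      mdadm --grow /dev/md0 --consistency-policy=ppl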

      http://www.dirtcellar.net



      • #13
        Originally posted by mazumoto View Post
        This is unfortunate ... a few years ago I got the impression that most problems were fixed (besides the write hole). I remember them changing the status for raid56 on their wiki from "experimental" to "mostly safe" or similar.
        I have actually been running a raid5 array of 5 disks for a few years now - because I cannot afford 3 more disks for raid1. So I definitely hope they keep (or start) working on raid56 again.
        I think you are thinking about "scrub+raid56" and not "raid56" itself. From what I can remember, raid56 has never been declared mostly ok at all. If you are on metadata=raid1c3 or raid1c4 and data=raid5 or raid6 you should be "ok" if you yourself make sure to run a scrub after every unclean shutdown, and have a tested, working backup.
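
        In practice that layout and routine look something like this (a sketch only; device names and mount point are placeholders, and raid1c3 needs kernel and btrfs-progs 5.5 or newer):

        Code:
        # metadata on raid1c3, data on raid5 (placeholder devices)
        mkfs.btrfs -m raid1c3 -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        # ...and after any unclean shutdown:
        btrfs scrub start -B /mnt/array   # -B runs in the foreground and reports the result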

        If you cannot afford more disks for RAID1/10 you should ask yourself if you can afford losing your data, and start saving as soon as you can. Any filesystem or RAID-like configuration is NOT a substitute for a backup. Even ZFS users who are realistic about it probably have backups.

        http://www.dirtcellar.net



        • #14
          Originally posted by waxhead View Post

          I think you are thinking about "scrub+raid56" and not "raid56" itself. From what I can remember, raid56 has never been declared mostly ok at all. If you are on metadata=raid1c3 or raid1c4 and data=raid5 or raid6 you should be "ok" if you yourself make sure to run a scrub after every unclean shutdown, and have a tested, working backup.

          If you cannot afford more disks for RAID1/10 you should ask yourself if you can afford losing your data, and start saving as soon as you can. Any filesystem or RAID-like configuration is NOT a substitute for a backup. Even ZFS users who are realistic about it probably have backups.
          On a slightly different note, I'm wondering if the above applies to HDD, SSD, or both (or if there are any different considerations there).



          • #15
            Originally posted by vladpetric View Post

            On a slightly different note, I'm wondering if the above applies to HDD, SSD, or both (or if there are any different considerations there).
            RAID with SSDs is always problematic, as normal (and ZFS) RAID implementations tend to write the same amount of data to every device. If you start a new array with identical new devices, they will tend to wear out and fail together.

            That's not true for SnapRAID or, to some extent, btrfs.



            • #16
              Originally posted by vladpetric View Post

              On a slightly different note, I'm wondering if the above applies to HDD, SSD, or both (or if there are any different considerations there).
              Actually there are quite a few things to consider. Things used to be simple(r). HDDs were mostly just HDDs (ignoring SCSI, IDE, SAS, SATA, etc...). These days we have regular HDDs, advanced format HDDs, SMR HDDs which act more like SSDs, and of course SSDs. All of these have fairly fancy firmware with its own share of bugs, large caches with memory that can go bad like any other memory, and of course all the modern NVMe, M.2 and other weird stuff.
              In theory it should all work the same... but who knows. I long for the good old days when men were men and hard drives were filled with women.

              http://www.dirtcellar.net



              • #17
                I don't trust raid1 on btrfs either; I remember a huge list of issues mentioned on the mailing list regarding its implementation. Most of them can be circumvented if the user is aware of them, but since the developers don't care much about documentation you cannot really expect anyone to know all the pitfalls. I use btrfs only in non-raid configurations; for everything else I stick with ZFS.

                P.S.
                The write hole is the least of the problems in raid5.
                ## VGA ##
                AMD: X1950XTX, HD3870, HD5870
                Intel: GMA45, HD3000 (Core i5 2500K)



                • #18
                  Originally posted by flower View Post

                  why not raid6?
                  Three years ago I warned that RAID 5 would stop working in 2009. Sure enough, no enterprise storage vendor now recommends RAID 5. Now it's RAID 6, which protects against 2 drive failures. But in 2019 even RAID 6 won't protect your data. Here's why.



                  • #19
                    Originally posted by RussianNeuroMancer View Post
                    Yes, I know.
                    That's why I asked why he only wanted warning prompts for raid5 and not for raid6.



                    • #20
                      Do these bugs only exist when using software RAID? What if you have a hardware RAID5 controller?

