Btrfs Restoring Support For Swap Files With Linux 4.21


  • #51
    Originally posted by waxhead View Post

    They did? I was under the impression that closing the write hole on all but raid0 requires a separate journal device, which means that by default it is not closed. I would appreciate it if you could share a link to this! Thanks!
    Yes, I meant with a separate journal device. Suboptimal of course, but I don't think this is an option with btrfs yet.
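
    For comparison, this is roughly how a plain md RAID5 array closes the write hole with a dedicated journal device. A minimal sketch only, with made-up device names:

        # Hypothetical devices: three data disks plus an SSD partition used as the write journal.
        # mdadm's --write-journal closes the RAID5 write hole at the cost of journaling every write.
        mdadm --create /dev/md0 --level=5 --raid-devices=3 \
              --write-journal /dev/nvme0n1p1 /dev/sda /dev/sdb /dev/sdc

    Nothing equivalent exists for btrfs raid5/6 yet, as far as I know.
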
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)



    • #52
      Originally posted by starshipeleven View Post
      It's "fixed" in the same way ZFS fixed it, by adding the ability to use a journaling device.
      RAID-Z is designed to overcome the RAID-5 write hole: no need for external journaling devices.
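
      As an illustration, a raidz pool is created without any separate journal/log device at all. A hypothetical sketch (pool and device names are made up):

          # Hypothetical pool name and devices. RAID-Z needs no separate journal device
          # because full-stripe copy-on-write writes avoid the RAID-5 write hole.
          zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
          zpool status tank
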
      ## VGA ##
      AMD: X1950XTX, HD3870, HD5870
      Intel: GMA45, HD3000 (Core i5 2500K)



      • #53
        Originally posted by jacob View Post

        Correct me if I'm wrong, but I believe that only ZVOL metadata uses CoW; the stored data itself is overwritten, which is actually the very difference between a ZVOL and a normal ZFS file. There can still be on-demand data CoW, such as when a ZVOL is snapshotted. In effect these are exactly the same features as non-CoW files in BTRFS, without the ease of use.
        I'm pretty sure that a ZVOL doesn't disable CoW in any way; can you please share your sources?
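
        For reference, a ZVOL can be created and snapshotted like any other dataset, which relies on the same copy-on-write machinery as regular files. A hypothetical sketch (pool and volume names are made up):

            # Hypothetical pool/volume names. Creates a 10 GiB ZVOL and snapshots it.
            zfs create -V 10G tank/testvol
            zfs snapshot tank/testvol@before
            # The snapshot keeps the old blocks while new writes allocate fresh ones (CoW).
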
        ## VGA ##
        AMD: X1950XTX, HD3870, HD5870
        Intel: GMA45, HD3000 (Core i5 2500K)



        • #54
          Originally posted by jpg44 View Post

          I thought this had already happened some time ago with SUSE subvolumes. Btrfs is a very reliable filesystem; RAID 5/6 is coming along well and should be production ready very soon. Btrfs is ready to replace Ext4 completely today. Ext4 never had RAID 5/6 in the first place.
          I agree. I'll add that, in my experience with Ext4 on Ubuntu, when I copy or move a file from one partition to another I notice a strange behavior: the progress bar reaches almost the end very quickly, but the operation still takes a long time to finish. This has never happened to me on Btrfs, where the progress bar is consistent with the actual copy. It may not be a problem with Ext4 itself but with the desktop environment's progress display; still, as mentioned, it does not happen with Btrfs.



          • #55
            Originally posted by darkbasic View Post

            RAID-Z is designed to overcome the RAID-5 write hole: no need for external journaling devices.
            The technique ZFS uses for that is quite simple: it forces an immediate read-back of the write from every single drive in the array to verify that the write was successful. This is expensive and (relatively) slow, but it does ensure your parity drives are in good shape. Btrfs could implement it, and I hope they will eventually offer it as an option.
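
            A related (though after-the-fact rather than per-write) check that both filesystems already offer is a scrub, which reads everything back and verifies it against checksums/parity. Hypothetical pool and mount-point names:

                # Hypothetical names. A scrub reads all data back and verifies checksums/parity.
                zpool scrub tank           # ZFS
                btrfs scrub start /mnt     # btrfs
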



            • #56
              Nope, RAID modes are not fine yet with btrfs. I know they advertise for example RAID1 as stable, but that only extends to btrfs itself not causing any corruption; once corruption actually occurs, you'd better stay away. One problem is that device replace won't work in case of IO errors, for example when specific sectors are corrupt on a disk (you cannot reconstruct "only" the intact parts). Another problem is when using single-disk degraded volumes (which is pretty common with raid1, when a disk of a 2-disk array fails), in which case your still-operational disk will go into an irrecoverable read-only mode (meaning it stays there even if you replace the failed disk, and the only way to recover is to recreate the whole array from scratch, or to convert the whole array to a non-redundant volume). Raid56 is in even worse condition.

              As it stands I'd only recommend btrfs for non-raid configurations, in which case I'd choose btrfs over ext4 for its snapshotting and subvolume features. For production-ready raid configs of any kind, zfs is still the most bullet-proof.
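
              For anyone who hasn't used them, the snapshot/subvolume features are a one-liner each; a minimal sketch with made-up paths:

                  # Hypothetical mount point and names.
                  btrfs subvolume create /mnt/data
                  btrfs subvolume snapshot -r /mnt/data /mnt/data-snapshot   # -r makes the snapshot read-only
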



              • #57
                Originally posted by ultimA View Post
                Nope, RAID modes are not fine yet with btrfs. I know they advertise for example RAID1 as stable, but that only extends to btrfs itself not causing any corruption; once corruption actually occurs, you'd better stay away. One problem is that device replace won't work in case of IO errors, for example when specific sectors are corrupt on a disk (you cannot reconstruct "only" the intact parts). Another problem is when using single-disk degraded volumes (which is pretty common with raid1, when a disk of a 2-disk array fails), in which case your still-operational disk will go into an irrecoverable read-only mode (meaning it stays there even if you replace the failed disk, and the only way to recover is to recreate the whole array from scratch, or to convert the whole array to a non-redundant volume). Raid56 is in even worse condition.

                As it stands I'd only recommend btrfs for non-raid configurations, in which case I'd choose btrfs over ext4 for its snapshotting and subvolume features. For production-ready raid configs of any kind, zfs is still the most bullet-proof.
                A common misconception is that BTRFS offers RAID... it does not. It uses the RAID terminology since that is the closest match you can get. If you look at the mailing list there have been several discussions (and suggestions) about how to fix this naming scheme into something more representative.

                So allow me to clear up the data (and metadata) storage profiles BTRFS uses, first and foremost...
                • SINGLE:
                  • Store only one copy of the data on any device
                • DUP:
                  • Store two copies (1x replica) of the data on the SAME device
                • RAID0:
                  • One copy striped over all available storage devices
                • RAID1:
                  • Two copies (1x replica) stored on two DIFFERENT storage devices (each chunk uses only two devices in this "RAID" configuration regardless of the number of disks - all drives will be utilized, but only one replica exists, i.e. 2 copies in total)
                • RAID10:
                  • Two copies (1x replica) each stored on HALF the available storage devices.
                • RAID5:
                  • One copy of the data striped over all but one storage device. The last storage device's worth of space is used for parity data so missing/damaged data can be reconstructed
                • RAID6:
                  • One copy of the data striped over all but two storage devices. The last two storage devices' worth of space is used for parity data so missing/damaged data can be reconstructed
                So once this is understood, you will understand what BTRFS "RAID"1 really is, and probably how nice it is that BTRFS has received patches for n-way mirroring, i.e. RAID1 but copying the same data over MORE than two devices. As far as I know these patches are not mainlined yet and are probably experimental.
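
                To make the profiles above concrete: they are simply chosen at mkfs time (or converted later with a balance). A sketch with placeholder device names and mount point:

                    # Hypothetical devices. -d selects the data profile, -m the metadata profile.
                    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
                    # An existing filesystem can also be converted between profiles online:
                    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt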

                Regarding device replace: BTRFS will happily move along reconstructing only the intact parts UNLESS it gets stuck on bad sectors, in which case you remove the failed device (like you would in any other RAID-like setup) and reconstruct from the remaining storage device. BTRFS is better here since it will KNOW whether the good storage device returns broken or good data, unlike traditional RAID implementations. Remember that RAID only protects against DISK FAILURE, not DATA CORRUPTION!!!
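
                As a sketch of what a replace looks like in practice (the device ID, target device and mount point here are made up):

                    # Hypothetical devid/paths. Replace device 2 with a new disk; -r tells btrfs
                    # to read from the old (possibly failing) device only when no other good copy exists.
                    btrfs replace start -r 2 /dev/sdd /mnt
                    btrfs replace status /mnt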

                The fact that BTRFS on "RAID"1 went into irrecoverable READ ONLY (which still allowed you to recover your data, but was a major hassle since you had to recreate your filesystem) was fixed in 4.14 or so I think... (but do not quote me exactly on that kernel version). In any case the previous behavior gave you ONE chance to replace your device and be happy. If you missed that chance you were stuck in read-only, so yes, I agree this WAS an annoyance (never hit it myself though).
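
                For reference, recovering a two-device "RAID"1 after losing a disk usually looks something like the following (placeholder device names, and assuming a kernel new enough not to hit the old read-only trap):

                    # Hypothetical devices. Mount the surviving disk degraded, add a new disk,
                    # then drop the missing one and rebalance the data back to two copies.
                    mount -o degraded /dev/sdb /mnt
                    btrfs device add /dev/sdc /mnt
                    btrfs device remove missing /mnt
                    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt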

                And "RAID"5/6 is not ready in BTRFS, but lots of nice things have happened, and the other "RAID" modes work fine as long as you understand that it is not really RAID, and as long as you know how to set up and manage BTRFS properly.

                http://www.dirtcellar.net



                • #58
                  You are belittling the problem of not being able to recover in case of IO errors, because you are assuming the IO errors happen on the disk you'd be replacing anyway. Often old rotational disks gradually get worse and worse, with a noticeably increasing bad sector count. So, especially if you've bought your disks as a pair, by the time one of your disks has died, it is not unlikely the other also has some bad sectors already. Which means with btrfs-raid1 you're fu***d. I'd expect that once a disk has failed, then even if the second one has some bad sectors, I should still be able to recover all other sectors once I replace the failed disk. It is unreasonable to lose all my data just because the thumbnail cache of my photo album got corrupted... So IMHO this is a major problem with btrfs right now.

                  If the "getting-stuck-in-ro-mode" issue has been fixed since I last tried (about a year ago maybe, I'm not sure, but it wasn't very long ago), I'm glad to hear it. I'm actually itching to try it out again, as I'd be interested in using it. I do think it is a real blocker and more than just a simple annoyance if it still exists, as it means you need extra storage for all your data to be able to recover. I know, RAID is no replacement for a backup, but let's be honest and face it: few users have the luxury to keep complete backups of all data, let alone backups that are always up to date. EDIT: I'll also add, even if this got fixed sometime in 2018, most users rely on standard kernels shipped by distributions, so many probably don't have the fix yet. Unless you're on a rolling release.
                  Last edited by ultimA; 14 December 2018, 07:37 PM.



                  • #59
                    Originally posted by ultimA View Post
                    You are belittling the problem of not being able to recover in case of IO errors, because you are assuming the IO errors happen on the disk you'd be replacing anyway. Often old rotational disks gradually get worse and worse, with a noticeably increasing bad sector count. So, especially if you've bought your disks as a pair, by the time one of your disks has died, it is not unlikely the other also has some bad sectors already. Which means with btrfs-raid1 you're fu***d. I'd expect that once a disk has failed, then even if the second one has some bad sectors, I should still be able to recover all other sectors once I replace the failed disk. It is unreasonable to lose all my data just because the thumbnail cache of my photo album got corrupted... So IMHO this is a major problem with btrfs right now.
                    I do not agree: if you use a standard RAID you will get corruption without knowing where (if both copies are broken). If you use BTRFS you will know for sure that the data you get back is correct, which is a big win in my opinion. You can always use the restore program bundled with BTRFS to recover in any case, and you would not be worse off than with "regular RAID". So I don't agree that you are "rhythmically violated from behind" with BTRFS; in fact you are much safer, and you know that what you get back is correct.
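
                    For anyone unfamiliar with it, btrfs restore copies files off an unmountable or damaged filesystem onto other storage. A minimal sketch with made-up paths:

                        # Hypothetical device and destination. Works on the unmounted (possibly damaged) filesystem.
                        btrfs restore -v /dev/sdb /mnt/recovery_target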

                    Originally posted by ultimA View Post
                    If the "getting-stuck-in-ro-mode" issue has been fixed since I last tried (about a year ago maybe, I'm not sure, but it wasn't very long ago), I'm glad to hear it. I'm actually itching to try it out again, as I'd be interested in using it. I do think it is a real blocker and more than just a simple annoyance if it still exists, as it means you need extra storage for all your data to be able to recover. I know, RAID is no replacement for a backup, but let's be honest and face it: few users have the luxury to keep complete backups of all data, let alone backups that are always up to date. EDIT: I'll also add, even if this got fixed sometime in 2018, most users rely on standard kernels shipped by distributions, so many probably don't have the fix yet. Unless you're on a rolling release.
                    You're right, it was not that long ago that this was fixed. And yes, I agree that it is a blocker for some setups. I for example have a server with only room for 2 disks. In this case it was an "ouch - if this ever fails" scenario.
                    If you don't have backups - luxury or not - your data is not important to you. It's that simple! What you don't have a (working) backup of is not something you are afraid to lose. I may be wrong, but I *think* the "RAID"1 stuff was backported to 4.9, but don't quote me on that!
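
                    (And with btrfs, incremental backups are fairly painless via send/receive. A hypothetical sketch with made-up paths:)

                        # Hypothetical paths. A read-only snapshot can be streamed to another btrfs filesystem.
                        btrfs subvolume snapshot -r /mnt/data /mnt/data@today
                        btrfs send /mnt/data@today | btrfs receive /backup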

                    http://www.dirtcellar.net



                    • #60
                      Originally posted by waxhead View Post
                      I do not agree: if you use a standard RAID you will get corruption without knowing where (if both copies are broken). If you use BTRFS you will know for sure that the data you get back is correct, which is a big win in my opinion. You can always use the restore program bundled with BTRFS to recover in any case, and you would not be worse off than with "regular RAID". So I don't agree that you are "rhythmically violated from behind" with BTRFS; in fact you are much safer, and you know that what you get back is correct.
                      Depends on the point of view. While I agree it is better than standard RAID, that does not mean BTRFS is the best choice. ZFS provides the same data-correctness guarantees you described for BTRFS (all the advantages compared to standard RAID), without any of the troubles/risks concerning IO errors or device replacement that can happen with BTRFS. So I'm just saying: if you would otherwise use btrfs in a raid-1 (or any other kind of raid) setup, you are currently better off and much safer with ZFS. I'm specifically referring to the ZFS-on-Linux port.

                      I will note though that BTRFS is not inferior to ZFS by design. On the contrary, once the devs iron out the bugs and implement the missing (but planned) features in the BTRFS code, it will be better than ZFS in many ways. They just haven't reached that point yet, and unfortunately they seem to be progressing rather slowly (though steadily).

