SUSE Reworking Btrfs File-System's Locking Code


  • #11
    Originally posted by moilami View Post



    They have official stalkers observing people through peep-holes. I don't like the idea that poor volunteer developers are being harassed by fundamentalist snowflakes. I can't do anything about it except change the distro I use and install.
    Dear lord, please keep this poison clear of the openSUSE community!



    • #12
      Well, it's about damn time.... Maybe when they evaluate this locking code they'll finally figure out how to fix disk layout corruption.... It's come a decade too late; waaaayyy too many people have already had a horrible experience with it, and now its reputation is so badly tarnished it'll never recover.

      It's desperately way past due to kill BTRFS and its guaranteed disk corruption with fire and brimstone. We desperately need a new CoW FS with an -actually- stable disk layout. Time for FS devs to get coding.....



      • #13
        Originally posted by duby229 View Post
        Well, it's about damn time.... Maybe when they evaluate this locking code they'll finally figure out how to fix disk layout corruption.... It's come a decade too late; waaaayyy too many people have already had a horrible experience with it, and now its reputation is so badly tarnished it'll never recover.

        It's desperately way past due to kill BTRFS and its guaranteed disk corruption with fire and brimstone. We desperately need a new CoW FS with an -actually- stable disk layout. Time for FS devs to get coding.....
        It will be about damn time when people read comments like this, start to think for themselves, and finally figure out that btrfs is in fact a very robust filesystem. It does have some weak points, but they are all documented. Let us know what disk layout corruption issues you are referring to, pretty please with sugar on top.


        http://www.dirtcellar.net



        • #14
          Btrfs saved me several times from disk failures and I trust it more than other file systems, especially when it gets constant improvements like this.



          • #15
            Originally posted by boxie View Post
            I had my first bad btrfs experience recently. 4 disk raid 5
            Please keep in mind that, according to the official status, RAID5/6 (the parity-based profiles) is not officially considered stable. (Further reading about RAID5/6 is available in the wiki.)
            Eventually it will get there, especially given companies like SUSE (and Facebook, etc.) investing resources into it.
            But that particular feature is not there yet.

            It would be like using XFS (because it's stable) and then complaining about the mess that ensued because you turned on a "they barely started this experiment yet" feature such as copy-on-write snapshotting.
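For context, the RAID profile is picked at mkfs time (or changed later with an online rebalance), so staying off the unstable parity modes is a one-flag decision. A command sketch; device names and the mount point are placeholders, and these need root and real block devices:

```shell
# Create a two-device btrfs volume with the stable RAID1 profile
# for both data (-d) and metadata (-m), instead of raid5/raid6:
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# See which profiles an existing, mounted filesystem uses:
btrfs filesystem df /mnt

# Migrate an existing volume away from raid5 with an online rebalance:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```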

            Originally posted by boxie View Post
            and 4 disk raid10 - 1tb SMR drives - access latency was > 200ms (which is obviously not cool) - had to drop back to a 4 disk md raid 10 with ext4.
            Also keep in mind that the whole point of BTRFS is to have a single layer handling several key features: CoW/snapshotting, systematic CRC checking of every bit of data (not only metadata), and optionally compression.
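That systematic checksumming is what makes an online scrub possible: the filesystem re-reads everything and verifies it against the stored CRCs. A command sketch; the mount point is a placeholder and this needs root on an actual btrfs mount:

```shell
# Re-read all data and metadata and verify each block against its
# stored checksum; -B runs in the foreground and prints a summary.
btrfs scrub start -B /mnt

# On a redundant profile (e.g. RAID1), a bad copy detected by scrub
# is rewritten from the good copy automatically.
btrfs scrub status /mnt
```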

            It's similar to running your EXT4 with an LVM layer in the middle with snapshotting enabled (except that BTRFS has a bit better performance in that regard).

            Also, load-balancing between RAID1 copies in BTRFS currently isn't optimal (it's still a very primitive PID-based system).

            If raw performance is the most important thing to you, you should indeed stick to a more-to-the-bare-metal FS such as EXT4.

            (Though keep in mind that in-place overwriting isn't necessarily the best for shingled drives, nor flash for that matter. You might have a bit more success with a log-structured or another CoW filesystem; compare with, e.g., F2FS.)

            (Or switch to flash. SMR drives aren't really meant for performance; they're mostly for large-volume archival and are better used as "seldom written" drives. At which point using a filesystem with CRCs actually *makes* sense.)

            Originally posted by duby229 View Post
            Well, it's about damn time.... Maybe when they evaluate this locking code they'll finally figure out how to fix disk layout corruption.... {...}
            It's desperately way past due to kill BTRFS and its guaranteed disk corruption with fire and brimstone.
            Disk corruption will mostly happen when you treat it as a regular in-place-overwrite filesystem (which can leave the system in an inconsistent state) and try to run fsck (whose main job is normally to make sense of inconsistent data, optionally with the help of a journal) to "repair" it. That will only corrupt it further.

            "fsck" doesn't make sens on a CoW FS (like BTRFS) or log-structured one. There is always by definition an older copy that is still consistent, and can usually be mounted and accessed with the corresponding recovery mount option.
            If that fails, you can still use the btrfs restore to get back as much as possible from your data from an unmounted block device.
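As a concrete illustration of that recovery path (device and destination paths are placeholders; the rescue= syntax needs a recent kernel, older ones spell it plain -o usebackuproot):

```shell
# 1. Try a read-only mount from an older, still-consistent tree root:
mount -o ro,rescue=usebackuproot /dev/sdb1 /mnt

# 2. If mounting fails entirely, copy files out of the unmounted
#    device into a directory on another filesystem:
btrfs restore -v /dev/sdb1 /recovered
```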

            The only thing that kills BTRFS is hardware failure, which would kill any other file system too (and in this case, again, use btrfs restore to get as much as possible out of it. Whereas with an in-place-overwrite FS, the proper procedure would be to make a perfect dd copy, run fsck on that, and fall back to photorec to carve out any lost files).

            If you're afraid of hardware failure, use proper redundancy (and again, BTRFS's own RAID5/6 isn't stable; use MDADM instead for now).

            And always back up, which is something CoW filesystems help simplify a lot: use snapshots to keep history (openSUSE's "snapper" tool simplifies this a lot) instead of complex "rsync with reference copies and symlinks/hardlinks" schemes. CRCs on backups are also a reassuring thing.
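A minimal sketch of that snapshot-based backup flow without snapper; paths are placeholders, /home must be a btrfs subvolume and /backup a mounted btrfs filesystem:

```shell
# Take a read-only snapshot (instant; unmodified blocks are shared):
btrfs subvolume snapshot -r /home /home/.snapshots/home-2019-06-10

# Replicate it to another btrfs filesystem; later runs can pass
# "btrfs send -p <previous-snapshot>" to ship only the delta:
btrfs send /home/.snapshots/home-2019-06-10 | btrfs receive /backup
```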

            Originally posted by R41N3R View Post
            Btrfs saved me several times from disk failures and I trust it more than other file systems, especially when it gets constant improvements like this.
            Just saved me again from a SD card corruption a couple of weeks ago.



            • #16
              Originally posted by DrYak View Post

              Please keep in mind that, according to the official status, RAID5/6 (the parity-based profiles) is not officially considered stable. (Further reading about RAID5/6 is available in the wiki.)
              Eventually it will get there, especially given companies like SUSE (and Facebook, etc.) investing resources into it.
              But that particular feature is not there yet.

              It would be like using XFS (because it's stable) and then complaining about the mess that ensued because you turned on a "they barely started this experiment yet" feature such as copy-on-write snapshotting.
              Oh, I know that there is a chance that it might eat the data, that's not what I am complaining about.

              I have a 5 disk raid5 in another machine, and it works flawlessly on the same type of drives - but - it has a much different access pattern.
              The Raid10 setup also exhibited the same bad access latency.

              The common denominator is the SMR drives. I'm guessing btrfs hasn't yet been tuned for these drives.



              • #17
                Originally posted by duby229 View Post
                Well, it's about damn time.... Maybe when the evaluate this locking code they'll finally figure out how to fix disk layout corruption....
                wtf even is "disk layout corruption", anyway?



                • #18
                  Originally posted by DrYak View Post

                  Please keep in mind that, according to the official status, RAID5/6 (the parity-based profiles) is not officially considered stable. (Further reading about RAID5/6 is available in the wiki.)
                  Eventually it will get there, especially given companies like SUSE (and Facebook, etc.) investing resources into it.
                  But that particular feature is not there yet. {...}
                  Um, just run the balance command..... That's literally all it takes......
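For readers wondering what that refers to: a balance rewrites allocated chunks, which is how you reclaim mostly-empty chunks (the usual fix for spurious "no space left" conditions) or migrate between profiles. A command sketch, assuming root and a filesystem mounted at /mnt:

```shell
# Rewrite only data chunks that are less than 50% used, returning
# the reclaimed chunks to the allocator:
btrfs balance start -dusage=50 /mnt

# A full balance with no filter rewrites everything and can take hours;
# check progress with:
btrfs balance status /mnt
```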

                  EDIT: "You said eventually it will get there, but I have to wonder how many more decades will that take?
                  Last edited by duby229; 10 June 2019, 06:54 AM.



                  • #19
                    Originally posted by starshipeleven View Post
                    wtf even is "disk layout corruption", anyway?
                    Where do I even start.....

                    I know, let's start with dd.... go ahead, dd a btrfs disk and see if it works anywhere else..... Try it..... After you -actually- try it we can start a real conversation about lvm.
                    Last edited by duby229; 10 June 2019, 07:03 AM.



                    • #20
                      Originally posted by duby229 View Post
                      Where do I even start.....

                      I know, let's start with dd.... go ahead, dd a btrfs disk and see if it works anywhere else..... Try it..... After you -actually- try it we can start a real conversation about lvm.
                      I clone my btrfs drives all the time with dd. (Technically I do it with pv, but that's just another tool that does a raw bit-by-bit copy.)
                      There is a limitation when you do raw copies, but it is not "it will not work anywhere else". Please be more specific so I know you know.
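What dd itself guarantees is easy to demonstrate on plain files; a sketch (filenames are placeholders):

```shell
# dd (or pv) is a raw byte-for-byte copy: the clone of an image file
# is bit-identical to the source.
truncate -s 1M source.img
printf 'some data' | dd of=source.img conv=notrunc status=none
dd if=source.img of=clone.img bs=64K status=none
cmp source.img clone.img && echo "clones are identical"
```

The limitation being alluded to is presumably the duplicated filesystem UUID: btrfs assembles multi-device volumes by UUID, so attaching a raw clone alongside its original can confuse device scanning. `btrfstune -u` can regenerate the UUID on an unmounted clone.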
                      Last edited by starshipeleven; 10 June 2019, 08:54 AM.

