Btrfs Gets Fixes For Linux 4.9, Linux 4.10 To Be More Exciting

  • #41
    Originally posted by rubdos View Post
    Also, are those RAID5/6 issues fixed yet? I love the btrfs soft RAID idea, but it made me very afraid.
    RAID5/6 has not seen much development, so no, it is still NOT usable. Btw, it's not "RAID5/6 issues", it's "RAID5/6 isn't developed yet and someone wrote unclear bullshit on the wiki".
    Now they've fixed the wiki at least, and there's a table of the features that work and those that don't.

    Comment



      • #43
        Originally posted by starshipeleven View Post
        RAID5/6 has not seen much development, so no, it is still NOT usable. Btw, it's not "RAID5/6 issues", it's "RAID5/6 isn't developed yet and someone wrote unclear bullshit on the wiki".
        Now they've fixed the wiki at least, and there's a table of the features that work and those that don't.
        It's not actually that bad. I have one test server which uses btrfs/raid5 (4x2TB) for faster snapshots. Uptime is already 57 days on a 4.7.0 kernel, and before that I'd updated it with a new kernel every 80-90 days or so since last winter. It runs scrub once per week. Zero corruption so far. I'd guess it probably won't survive a hardware failure, but it won't corrupt data or do anything serious. Reiser4 used to be a lot worse.
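The weekly scrub routine described above can be wired up with a plain cron entry. A minimal sketch, assuming the array is mounted at /mnt/data (the path and schedule are illustrative, not from the post):

```shell
# Illustrative crontab entry: scrub the btrfs array every Sunday at 03:00.
# -B runs the scrub in the foreground so cron captures the exit status;
# checksum mismatches found (and repaired from parity/mirrors) are logged.
# m  h  dom mon dow  command
0  3  *   *   0    /usr/bin/btrfs scrub start -B /mnt/data
```

`btrfs scrub status /mnt/data` shows the result of the most recent run.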

        Comment


        • #44
          Originally posted by AndyChow View Post



          The link didn't work; there's a "----it" stuck at the end. Very interesting slides.

          The most shocking thing about such an unstable FS is that it's not an after-school project. This is an Oracle, Facebook, Fujitsu, SUSE, etc. project, and it's still half-baked.
          What btrfs is doing is VERY complex, so it's not too surprising.
          What I'd like to know are details about the btrfs fs that they tested.
          Saying it's half-baked seems a bit hyperbolic, imho.

          Comment


          • #45
            Originally posted by caligula View Post
            It's not actually that bad. I have one test server which uses btrfs/raid5 (4x2TB) for faster snapshots. Uptime is already 57 days on a 4.7.0 kernel, and before that I'd updated it with a new kernel every 80-90 days or so since last winter. It runs scrub once per week. Zero corruption so far. I'd guess it probably won't survive a hardware failure, but it won't corrupt data or do anything serious. Reiser4 used to be a lot worse.
            Hello? The checksumming code for RAID5/6 sucks ass (plenty of random hiccups causing filesystem crashes), and parity isn't even checksummed, so the first time a checksum fails the whole RAID is gone.

            Really, no. Just no.

            Comment


            • #46
              Originally posted by Serafean View Post
              I'd actually like to use RAID5/6. I have quite a bit of data; I do have off-site backups, but those are a pain to use, and I don't have the capacity to use RAID1...
              Yea, but I consider that a bug, not a shiny new feature. That's my point: they need to focus on that now, not on new features.

              Comment


              • #47
                Originally posted by dcrdev View Post
                For me, something that's currently making me contemplate switching back to xfs/mdadm is that btrfs doesn't seem to play nicely with systemd one bit, in particular with RAID setups: because multiple disks have the same UUID, systemd often complains and, maybe 50% of the time, hangs at boot. Not ideal for a server being managed from another location.

                There is an ongoing bug filed here: https://bugzilla.redhat.com/show_bug.cgi?id=1354131 but no one seems to care about it.
                Uh, what? I'm running Btrfs in RAID1 and all the partitions have their own UUIDs. I have no idea why yours don't. (I hope you didn't block-copy one to the other or something; that's a very bad idea.)
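For anyone comparing notes on the UUID point, it's worth checking what identifiers the member devices actually expose. A hedged sketch (device names are placeholders): on a multi-device btrfs filesystem, blkid reports one shared filesystem UUID for every member, while the GPT partition UUIDs stay unique, which may explain why the two of you see different things:

```shell
# Illustrative inspection of two btrfs member devices
# (/dev/sda2 and /dev/sdb2 stand in for your own devices):
#
#   blkid -o export /dev/sda2 /dev/sdb2
#     UUID      -> the btrfs filesystem UUID, shared by all members
#     PARTUUID  -> the GPT partition UUID, unique per device
#
#   btrfs filesystem show
#     lists every member device grouped under one filesystem uuid
```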

                Comment


                • #48
                  Originally posted by liam View Post
                  It's still not very solid.
                  It was the first fs to fail in the filesystem fuzz testing a few months ago (https://events.linuxfoundation.org/s...6_0.pdf)----it lasted 5 secs. For comparison, xfs lasted, iirc, 1.5 hrs, and ext4 2 hrs (the longest).
                  Now, I'm not too worried about that, because I'd guess it got stuck somewhere that doesn't get much testing, and I make it a point to use safe, recommended settings.
                  It still has a number of features it needs to get working properly before I'd consider it good ENOUGH (proper autodefrag: according to tests run by Ceph users, the latest autodefrag still doesn't work nearly as well as a simple defrag daemon; and stable RAID with the write hole closed).
                  "Getting", I said. That's exactly my point; they don't need to do any new features. Bugfix releases like this are the best thing, and very exciting, right now.
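The two defragmentation approaches mentioned in the quote can be contrasted concretely; the mount point and fstab line below are illustrative assumptions, not a recommendation from either poster:

```shell
# a) autodefrag as a mount option: the kernel detects small random
#    writes and rewrites them in larger batches (fstab line; the
#    UUID is a placeholder):
# UUID=<fs-uuid>  /mnt/data  btrfs  defaults,autodefrag  0 0

# b) explicit recursive defragmentation, the "simple defrag daemon"
#    alternative, run periodically from cron or a script:
# btrfs filesystem defragment -r /mnt/data
```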

                  Comment


                  • #49
                    Originally posted by Enverex View Post
                    Why the hell can't I quote multiple people on this forum? It seems to get stuck on the one I first clicked, horrible. Anyway...
                    Looks like it works per-page, but not across pages. That's weird...

                    Originally posted by rubdos View Post
                    They finally do auto balancing? Awesome.

                    Also, are those RAID5/6 issues fixed yet? I love the btrfs soft RAID idea, but it made me very afraid.
                    I don't know if they do it yet, but it was the plan to solve the ENOSPC issue. My point is that this is the kind of thing people should find exciting right now.

                    And no, RAID5/6 is not solved yet, but I'm running RAID1 and so far it's been pretty good. There were some strange free space issues at the beginning, but then I reformatted the drives and it works fine now.
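For what it's worth, "strange free space issues" on btrfs usually trace back to fully allocated but mostly empty chunks, and the commonly suggested workaround (short of reformatting) is a filtered balance. A sketch with illustrative thresholds and mount point:

```shell
# Rewrite data/metadata chunks that are at most 50% full, returning
# their space to the unallocated pool (thresholds are illustrative):
# btrfs balance start -dusage=50 -musage=50 /mnt/data
#
# Before/after, compare chunk allocation against actual usage:
# btrfs filesystem df /mnt/data
```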

                    Comment


                    • #50
                      Originally posted by GreatEmerald View Post

                      "Getting", I said. That's exactly my point; they don't need to do any new features. Bugfix releases like this are the best thing, and very exciting, right now.
                      Assuming the ENOSPC nonsense is truly behind us (I'm not saying it isn't tricky, but they knew from the beginning that this was a big problem with CoW, and reflink didn't help the situation) and they don't have to change the on-disk format again, yeah, I think that's reasonable.

                      Comment
