Btrfs On Ubuntu Is Running Well


  • #21
    Originally posted by Brane215 View Post
I don't see any key advantage of btrfs over ext4 that would make me switch now.
And it is not clear how it can scale better than existing mdraid+lvm solutions.
    Compared to ext4, btrfs can do 16TB+ volumes, unlike the standard version of ext4 that ships with most distros.
    I've used MDADM and MDADM+LVM myself for years, and it does work well. LVM even has some practical advantages such as being able to expose raw block devices for iscsi mounts, etc. Btrfs cannot do that, as subvolumes are not partitions, but one could use a file instead of an LVM block device instead. You may have to adjust your work process a bit, but btrfs is just infinitely more flexible overall.

I've run my RAID-5/6 arrays for quite some time without much trouble. During that time I restriped and expanded them a few times, without bad consequences.
I have many software RAID arrays that have run flawlessly for years, and I have replaced many disks without issue. I really do like and trust MDADM. However, btrfs is just so much simpler for managing arrays. You can mix and match different-size disks, and also expand and shrink arrays while they're online (ZFS cannot do this). Working with logical volumes + ext4 gives you the ability to expand, but not shrink.

I'll give you another extreme example: Let's say you have a 12 disk x 1TB RAID array. If you wanted to upgrade to 2TB disks, with btrfs you could just remove a few disks (space permitting) and add a few bigger ones. This would be time consuming, but it's possible to upgrade an entire array like this without downtime. With MDADM it's possible but significantly more difficult, as the original disk sizes are stored in the metadata. It takes some... work, so it's usually just easier to build a new array and copy the data over.
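    For the curious, that btrfs drive swap boils down to just a couple of commands (device names and mount point here are made up, and you need enough free space for each removal):

```shell
# Add one of the new, bigger disks to the mounted filesystem
btrfs device add /dev/sdm /mnt/array

# Remove an old disk; btrfs migrates its data onto the
# remaining devices before releasing the drive
btrfs device delete /dev/sdb /mnt/array

# Repeat the add/delete pair per disk, then spread the data
# evenly across the new layout
btrfs balance start /mnt/array
```

    The whole thing happens while the filesystem stays mounted and in use.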

With RAID-5 I know that I have the equivalent of one drive for checksums.
The difference between traditional RAID-5 and btrfs is that the checksumming is done at a per-file/per-metadata level instead of at a per-stripe level. There are a lot of articles online about the RAID-5 write hole, and how checksummed file systems like btrfs can help avoid it.
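    The per-file checksums are also what makes a scrub meaningful — btrfs can tell which copy of a block is actually corrupt, rather than just noticing a parity mismatch (mount point here is made up):

```shell
# Read every block and verify it against its stored checksum;
# on redundant profiles, bad copies are repaired from a good one
btrfs scrub start -B /mnt/array   # -B: run in the foreground, print stats

# Or, for a background scrub, check on it later
btrfs scrub status /mnt/array
```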


In my opinion the main thing holding btrfs back from production use in the DC is the userspace tools. The btrfs utilities make working with arrays very simple, but there is not yet much automation for dealing with removing bad disks from arrays, working with hot spares, etc. I'm hoping this comes soon.

    Comment


    • #22
      Originally posted by benmoran View Post
      Compared to ext4, btrfs can do 16TB+ volumes, unlike the standard version of ext4 that ships with most distros.
I have had a 15TB ext4 partition on one server for some time now and it runs without a problem. I don't have the feeling that I would run into problems if I decided to expand it further. ATM it's made of 9 2TB drives.

I'll give you another extreme example: Let's say you have a 12 disk x 1TB RAID array. If you wanted to upgrade to 2TB disks, with btrfs you could just remove a few disks (space permitting) and add a few bigger ones. This would be time consuming, but it's possible to upgrade an entire array like this without downtime. With MDADM it's possible but significantly more difficult, as the original disk sizes are stored in the metadata. It takes some... work, so it's usually just easier to build a new array and copy the data over.
An insignificant corner case. What would I gain with a couple of bigger disks in a redundant RAID, outside of trivial cases? If I'm running RAID-5, I need to know that my data is distributed across all drives with parity, so no matter where it is:

- one faulty drive won't kill me
- I can count on the transfer performance of RAID-5 with N drives


The difference between traditional RAID-5 and btrfs is that the checksumming is done at a per-file/per-metadata level instead of at a per-stripe level. There are a lot of articles online about the RAID-5 write hole, and how checksummed file systems like btrfs can help avoid it.
Extra checksums are nice, but as an orthogonal addition to RAID parity, not as a replacement.


In my opinion the main thing holding btrfs back from production use in the DC is the userspace tools. The btrfs utilities make working with arrays very simple, but there is not yet much automation for dealing with removing bad disks from arrays, working with hot spares, etc. I'm hoping this comes soon.
This alone is reason enough to keep away from it as a solution that could replace RAID infrastructure. How could one seriously use such a solution without maintenance tools?

      Comment


      • #23
        At least for me, F2FS is much more exciting news than BTRFS.

F2FS fits a nice niche - being efficient with simple FLASH storage. Previously I used NILFS2 for booting from a USB key, but it had its fatal shortcomings. NILFS2 works wonders on even the cheapest USB sticks, but man, when you trigger an update storm on it, it's a tornado...

F2FS has some compromises and solutions to that end, and so far I like it. It too is not really great on maintenance tools, but OTOH no one is proposing it as a RAID killer all over the datacenters of the world, and if the fecal matter collides with the air turbine, the USB stick can be reformatted and written again...

        Comment


        • #24
          Originally posted by Brane215 View Post
I have had a 15TB ext4 partition on one server for some time now and it runs without a problem. I don't have the feeling that I would run into problems if I decided to expand it further. ATM it's made of 9 2TB drives.
You can't expand further than 16TB, as that is the limit for the version of the ext4 userspace tools that ships with distributions. A version that supports more than 16TB exists, but it's not widely tested - even less so than btrfs.
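          For reference, whether a given ext4 filesystem can grow past 16TB comes down to 64-bit block numbers, which (assuming an e2fsprogs new enough to support them) have to be enabled when the filesystem is created (the device name here is made up):

```shell
# See whether the filesystem was created with the 64bit feature
tune2fs -l /dev/md0 | grep -i 'features'

# A new filesystem with 64-bit block numbers can later
# grow past the 16TB boundary (destroys existing data!)
mkfs.ext4 -O 64bit /dev/md0

# Growing the filesystem to fill the underlying device
resize2fs /dev/md0
```

          Without that feature flag, resize2fs stops at 16TB no matter how big the array underneath is.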

An insignificant corner case. What would I gain with a couple of bigger disks in a redundant RAID, outside of trivial cases? If I'm running RAID-5, I need to know that my data is distributed across all drives with parity, so no matter where it is:
          You missed my point. All I was trying to say was that with btrfs you can easily cycle out ALL of the drives in the array with bigger drives, thereby resulting in a bigger array. Expanding array storage is not a corner case, it's a main issue in storage.

- one faulty drive won't kill me
- I can count on the transfer performance of RAID-5 with N drives
Extra checksums are nice, but as an orthogonal addition to RAID parity, not as a replacement.
From these comments, it's now apparent that you don't understand why btrfs is interesting. It sounds like you don't have much of a reason to switch from your current setup, so I would think you're better off keeping what you have.

          Comment


          • #25
            Originally posted by benmoran View Post
From these comments, it's now apparent that you don't understand why btrfs is interesting. It sounds like you don't have much of a reason to switch from your current setup, so I would think you're better off keeping what you have.
Or maybe you drank a ton of Kool-Aid from PR articles.

            Comment


            • #26
              Originally posted by benmoran View Post
              You missed my point. All I was trying to say was that with btrfs you can easily cycle out ALL of the drives in the array with bigger drives, thereby resulting in a bigger array. Expanding array storage is not a corner case, it's a main issue in storage.
This I don't really understand. You add a few new drives, add them to the RAID as hot spares, then remove the small drives one by one until mdadm has activated all the big drives and rebuilt the data onto them. Then you remove the old drives.

Repeat until done. After you have all the drives done, you grow the array. After that, you expand the fs. All the time, all your data stays on the RAID-5/6.
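              As a sketch, that procedure looks something like this (device names are made up, and each --fail triggers a full rebuild onto one of the new spares, so it takes a while per disk):

```shell
# Add the new, bigger disks; they join the array as spares
mdadm /dev/md0 --add /dev/sde1 /dev/sdf1

# Fail and remove an old disk; mdadm rebuilds onto a spare
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# ...repeat for each old disk, waiting for each rebuild to finish...

# Once only big disks remain, grow each member to its full size
mdadm --grow /dev/md0 --size=max

# Finally, expand the filesystem on top of the array
resize2fs /dev/md0
```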

Does btrfs even offer RAID-5/6 at the moment?

              Comment


              • #27
                A couple more things come to mind:

1. Feature availability:

When it comes to btrfs, missing and yet-to-be-added features are usually written about as almost there or practically finished. With ext4, metadata expansion is presented as a huge problem, since the new version will have to be extensively tested.

2. Objective functionality:

I expect stability, reliability and a full set of tools from a filesystem that is advertised as a solid ext4 replacement for the main partition. Even more so when it is advertised as a RAID/LVM killer.
Btrfs has caused TONS of problems in the past, with little indication that those are firmly behind it, with little support from tools and bad or non-existent speed/performance figures. How can something kill my RAID-5 when it doesn't even have equivalent functionality? And even when it acquires it, how can you know that something won't explode in the fresh parts of that code?

                3. Data/metadata protection:

All that checksumming stuff is fine. But what good is that protection when the filesystem itself is unstable? I don't expect miracles from a filesystem. It's just a layer of abstraction between my kernel and the drive itself. It tries to anticipate the next reads/writes, to read ahead, cache and coalesce. That's about the limit of how much it can "improve" a drive's raw speed. Unless of course we are talking about FLASH storage without a really intelligent and capable controller with a good amount of cache. With FLASH, I want an fs that is tailored for it, not tweaked.

So I don't particularly care if btrfs is 20% or 30% ahead of or behind on some test, even more so when this is a consequence of the nature of the filesystem. As long as there is no huge speed penalty for the common use case, and as long as it makes optimal use of FLASH, if it is on such HW.

But if it croaks now and then, and when it does all I'm left with is an empty toolbox full of spider webs and with my dick in my hand, then thanks, but no thanks. I might test it and play with it, but no way am I using it for important stuff...

                Comment


                • #28
                  Originally posted by Brane215 View Post
This I don't really understand. You add a few new drives, add them to the RAID as hot spares, then remove the small drives one by one until mdadm has activated all the big drives and rebuilt the data onto them. Then you remove the old drives.

Repeat until done. After you have all the drives done, you grow the array. After that, you expand the fs. All the time, all your data stays on the RAID-5/6.

Does btrfs even offer RAID-5/6 at the moment?
Sure, give it a try. It's obvious you've never done it. It's possible to do, but it requires a few extra steps.

                  You said that you couldn't see any reasons for using btrfs over mdadm+lvm, and I gave you some.
                  Keep your 15TB RAID5 and have fun.

                  Comment


                  • #29
                    Originally posted by benmoran View Post
Sure, give it a try. It's obvious you've never done it. It's possible to do, but it requires a few extra steps.
Who cares about a few extra steps? Do you reshape your disk storage this way several times per day?
I USE it every day, most of the time. So its performance during actual work is by far the primary thing for me. The amount of one-time maintenance work, if it is not excessive or unreasonable, is totally irrelevant.


                    Keep your 15TB RAID5 and have fun.
                    I don't have RAID for fun.
                    Last edited by Brane215; 03 September 2013, 10:40 AM.

                    Comment


                    • #30
One more thing WRT ext4's 16TB limit.

I seem to remember that limit being much lower and being lifted once in the not too distant past.

Could it be that the ext4 creators were expecting the Btrfs team to do its thing and deliver a solid fs some time ago?

                      It seems that they tweaked ext4 as little as possible, just to lift the limit long enough so that btrfs could take it from there.

And now many of the supposedly killer features aren't even out of beta yet...

                      Comment
