bootable raid 0, mdadm or native btrfs?

  • miliki
    Junior Member
    • Oct 2021
    • 5

bootable raid 0, mdadm or native btrfs?

    Hello,

What are your opinions on, or experience with, bootable RAID 0?
Is it better to go with mdadm and btrfs on top of it, or just go for native btrfs RAID?

I'm guessing that if there was a problem at the mdadm level, btrfs would be blind to it, so in that case it would be better to go full native btrfs RAID.
On the other hand, mdadm is "labeled" as the more mature solution. How much of that is just a lingering old opinion, and how much is still true or still matters in 2021, or does anyone really care, or does it make any difference? I don't know.

    thanks.
  • jbean
    Junior Member
    • Nov 2019
    • 20

    #2
You can use btrfs on top of md since md is independent of the filesystem. So if btrfs snapshots are your goal, there's not much difference on the surface. That said, I find it more complicated to set up if your goal is to use it for root, tho I suppose it depends on what you're going for, what distro you're using and how you plan to configure it.
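For reference, a minimal sketch of the md-plus-btrfs layering (device names and the /mnt mount point are placeholders):
Code:
# create a 2-disk RAID0 array with mdadm, then put btrfs on top of it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.btrfs -L data /dev/md0
mount /dev/md0 /mnt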

In a regular RAID0, whether you use btrfs or MD, if a single disk fails, your data is toast. This isn't a shortcoming of MD or Btrfs; it's just the nature of RAID0. It's striping only, no redundancy.

If you need redundancy as well as speed, something like RAID10 would be much better (tho it requires 4 disks). In this case, Btrfs RAID10 is by far the safer solution since its checksumming can repair data on the fly, and you can check for and repair issues with scrub.
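A rough sketch of that setup, assuming four placeholder disks:
Code:
# create a 4-disk btrfs RAID10 (data and metadata both striped and mirrored)
mkfs.btrfs -L data -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /mnt
# verify checksums across the array and repair from the good copy where possible
btrfs scrub start /mnt
btrfs scrub status /mnt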

The main benefits of Btrfs RAID over just using MD for RAID0 are that you have the flexibility to rebalance to any other RAID profile, and to add and remove disks on the fly. If you add more disks, it's pretty simple to rebalance all the data (that is, rewrite it) across all the disks. You can easily upgrade, in place, on a live system, to RAID10 if you get more disks in the future, so you get the added redundancy without having to redo everything.
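As a sketch, that in-place upgrade from a 2-disk RAID0 to a 4-disk RAID10 might look like this (devices and mount point are placeholders):
Code:
# add the two new disks to the existing, mounted filesystem
btrfs device add /dev/sdc /dev/sdd /mnt
# rewrite all data and metadata into the raid10 profile, live
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt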

The downsides to btrfs in general, apart from raid5/6 being completely unstable, are that it's not always suitable on hard disks for specific use cases. If you plan to use VMs or databases (any use case where a file gets a lot of really small random writes that aren't appends), Btrfs fragments far faster than ext4 or XFS due to its copy-on-write nature. You can defrag btrfs, but defragging breaks snapshots and reflinks.

While this isn't always noticeable to SSD users (although it can be in extreme cases), it is on hard disks, to the point of dragging the entire system to a halt in the most extreme cases. The solution is to set the nocow attribute beforehand, in anticipation of this (you can't set nocow on existing files without rewriting them), and/or enable autodefrag on the filesystem (I always do this even on SSDs... it isn't quite the same thing as a manual defrag, btw). Autodefrag groups small writes into larger portions by reading adjacent data, and while this does work well for workstations, it isn't a good solution for a lot of VMs because of the obvious write amplification. While autodefrag does break reflinks like manual defrag does, it only does so for the portions of files where it kicks in, so your snapshots don't get totally duplicated. It's a good tradeoff for workstation users, but not suitable for server use cases.
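As a sketch of both options (the VM image directory is just an example path; autodefrag would normally go in fstab, the remount is only for illustration):
Code:
# nocow is inherited: new files created under this directory skip copy-on-write
# (it has no effect on data already written)
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images
# enable autodefrag for the whole filesystem via the mount options
mount -o remount,autodefrag /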

So TL;DR: it really depends what your goal is. I'm a firm believer that all these solutions have their pros and cons and people should pick the best one for their use case. However, reconsider RAID0 if you're going for redundancy, regardless of what method you go with (I say this only because you mention btrfs would be blind to an md RAID0 failure, even tho it's the same thing in this case :P)

    Comment

    • miliki
      Junior Member
      • Oct 2021
      • 5

      #3
      jbean, thanks for answering.

      o.s. = kde neon

I'm running out of space, so while I'm considering adding a new SATA SSD drive, I figured I'd take advantage of the opportunity to think about reinstalling the o.s. and maybe getting a potential speed boost.
The goal (just thinking about it, no real imminent commitment) is speed from the get-go, plus "bitrot" notification, and that's it, no other features.
I would restore from backup if something fails.


      The idea is to boot straight from raid 0.

      I've just tried to simulate this in a vm.
So: 1 ESP partition, 1 partition for root, and the rest for home.
Partitions created, 2 btrfs RAID0 filesystems created (root and home), 2 mount points created and the RAIDs mounted on them. Sadly, when the o.s. installation starts, it only sees devices, not the mount points/folders. So right there, it's a non-starter for a btrfs RAID0 boot drive.

      Thanks to virtual machines, I'll have "fun" considering/testing the other setups/scenarios before committing to anything.

      Thanks again.

      Comment

      • jbean
        Junior Member
        • Nov 2019
        • 20

        #4
Btrfs does not need to run as RAID0, fwiw, just to use two disks. It can span two disks even with the default (single) block profile. Of course, this doesn't let you take advantage of the speed, but if the two disks are mismatched sizes, it can span both (ie a 500GB disk and a 1TB disk = 1.5TB usable). You can also gain even more speed with filesystem compression, since ofc any compressible data "appears" to the system to write faster.
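A rough sketch of that mismatched-disk setup with compression, using placeholder device names:
Code:
# 'single' data profile spans both disks without striping; metadata mirrored for safety
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
# zstd compression makes compressible data write (and read) faster
mount -o compress=zstd /dev/sdb /mnt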

As for home and root, with btrfs this should be a *single* partition. I'm not familiar with how the Neon installer handles this, but with btrfs you typically use subvolumes for this. Subvolumes can be thought of like partitions, but they're all one filesystem and appear as directories when you mount the "top level" subvolume.

They're easily listed with
        Code:
btrfs subvolume list /mnt/point
The Ubuntu installer automatically creates a root subvolume called "@" and a home subvolume called "@home", and each is then mounted via fstab accordingly, even though they are the same filesystem. (If you want to limit the size of one or the other afterwards, you can use quotas/qgroups, but there are performance constraints when working with a lot of snapshots.) This is one of the other big benefits of btrfs: you don't need to allocate each ahead of time, and you don't have to screw around with resizing things if you need more space for home that you'd have to borrow from root, or vice versa.
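Roughly what the resulting fstab entries look like (UUID elided; the exact options vary by installer):
Code:
# one filesystem, two subvolumes, mounted at different points
UUID=... /     btrfs defaults,subvol=@     0 0
UUID=... /home btrfs defaults,subvol=@home 0 0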

        I'm not certain how the Neon installer handles this compared to Ubuntu, if it's identical or not, but it's pretty simple to move everything in the end if needed. Of course, other distros like Manjaro, Fedora and OpenSUSE do this as well, and with something like Arch you can do this all manually with much finer control over it. And of course, you could even take this a step further and install multiple distros all to the same btrfs filesystem, separated by subvolumes, but now we're getting complicated :P
        Last edited by jbean; 07 October 2021, 04:03 PM.

        Comment

        • miliki
          Junior Member
          • Oct 2021
          • 5

          #5
          Hello,

Some weird effects, or maybe I'm missing something. Hopefully someone has a clue or can help.
I've done, more or less, what I mentioned above, in a VM as a test, before I do it on the real machine.

Hopefully I'm not bothering anyone and what I'm writing isn't too complicated.

          so:
          2 drives of 100G each

          1st drive has:
          --- vda1= 1GB for esp
          --- vda2= 50GB for / , btrfs
          --- vda3= 49GB unmounted, btrfs (will be used for the raid)
          2nd drive has:
          --- vdb1= 100GB, unmounted, btrfs (will be used for the raid)

Proceeded with installation of the o.s. on the 1st drive as normal, with the above manual partitioning. No problems, and rebooted.
          Login with user.
          -run " sudo systemctl isolate rescue.target " apparently to log in as root and be able to move /home to the raid.
          -create temporary mounting point for the raid ( mkdir /raid )
-create raid ( mkfs.btrfs -L home -d raid0 -m raid0 -f /dev/vda3 /dev/vdb1 ) This will display a UUID that can be used.
          -mount raid ( mount UUID=.... /raid )
          -copy /home to raid ( cp -rp /home/* /raid )
          -edit fstab to automount the raid ( nano /etc/fstab and add "UUID=... /home auto defaults,noatime 0 0" )
          -reboot

Seems to have worked: a 149GB RAID0 in which there's only a /home folder, or rather, /home is the RAID.

          Now for the weirdness:
1-Dolphin (the KDE file manager) displays two devices named "home" in the devices section. I guess one is the one currently mounted by fstab, but the other is what? The old "home" folder? (How could that even be seen?) Is this correct? Why display it? Just why?

2-Dolphin says that there's only, more or less, 99GB of free space. The only thing in the RAID is the home folder, which only has a few megs, so it should be roughly 148GB free (the 149GB RAID minus the few MB of the folder). What's the problem?
----"df -H /home" also gives weird results.
----Running "btrfs filesystem usage /home" gives a more accurate picture of used data and free space.
----When copying a file to a folder (for ex: a 1GB file), Dolphin now says there's only 98GB free (the original 99 minus the 1GB of the file). I'm afraid that Dolphin, and for that matter other programs, will say "can't copy anymore" or "no more free space" when in fact there's loads of it.
----It's like the entire 49GB of the 1st drive of the RAID is out of sight, or Dolphin (and df) can't comprehend btrfs RAID0. Or it's something else completely that I don't understand.
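For anyone comparing, the btrfs-native commands that do seem to report space correctly on a multi-device filesystem (unlike df) are:
Code:
# btrfs-aware reporting: per-device and per-profile allocation
btrfs filesystem usage /home
btrfs filesystem df /home
btrfs filesystem show /home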

          Comment
