Originally posted by Brane215
I've used MDADM and MDADM+LVM myself for years, and it does work well. LVM even has some practical advantages, such as being able to expose raw block devices for iSCSI exports and the like. Btrfs cannot do that, since subvolumes are not block devices, but you can use a file on the filesystem in place of an LVM logical volume. You may have to adjust your workflow a bit, but overall btrfs is far more flexible.
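As a rough sketch of that file-backed approach (the paths and sizes are placeholders, and the iSCSI target configuration itself is left to whatever stack you run):

    # Create the backing file on the btrfs filesystem; set the NOCOW
    # attribute while it is still empty to avoid heavy fragmentation
    touch /mnt/pool/iscsi/disk01.img
    chattr +C /mnt/pool/iscsi/disk01.img
    truncate -s 100G /mnt/pool/iscsi/disk01.img

    # Expose it locally as a block device
    losetup --find --show /mnt/pool/iscsi/disk01.img
    # prints e.g. /dev/loop0, which the iSCSI target can export;
    # a fileio backstore (LIO) can also use the file directly, no loop device needed

The loop device (or the file itself) then plays the role the raw LVM volume used to.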
I have run my RAID-5/6 arrays for quite some time without much trouble. During that time I restriped and expanded them a few times, without bad consequences.
I'll give you another extreme example: let's say you have a 12-disk x 1TB RAID array and want to upgrade to 2TB disks. With btrfs you can simply remove a few disks (space permitting) and add bigger ones, repeating until every drive has been swapped. It's time consuming, but an entire array can be upgraded this way without downtime. With MDADM it's possible but significantly more difficult, because the original device size is recorded in the array metadata; it takes some... work, so it's usually easier to just build a new array and copy the data over.
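A rough sketch of the btrfs side of that rolling upgrade, assuming the array is mounted at /mnt/array and the device names and devid are placeholders:

    # Replace one old 1TB disk with a new 2TB disk in a single pass
    btrfs replace start /dev/sda /dev/sdm /mnt/array
    btrfs replace status /mnt/array

    # Grow the filesystem to use the full capacity of the new disk
    btrfs filesystem resize 1:max /mnt/array

    # Alternative: add the new disk first, then remove the old one,
    # which migrates the data onto the remaining devices
    btrfs device add /dev/sdm /mnt/array
    btrfs device remove /dev/sda /mnt/array

    # Repeat per disk; a final balance spreads data evenly across the new drives
    btrfs balance start /mnt/array

In practice "btrfs replace" tends to be faster than add+remove, since it copies to the new device directly instead of restriping through the allocator.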
With RAID-5 I know that I lose the equivalent of one drive to parity; a 12 x 1TB array gives roughly 11TB of usable space.
In my opinion, the main thing holding btrfs back from production use in the datacenter is the userspace tooling. The btrfs utilities make working with arrays simple, but there is not yet much automation for things like removing failed disks from an array or managing hot spares. I'm hoping that comes soon.
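For comparison, this is roughly what the manual procedure looks like today; a hot-spare daemon would essentially script these steps. Mount point, device names and the devid are placeholders:

    # Check per-device error counters to spot a failing disk
    btrfs device stats /mnt/array

    # If a disk has died outright, the filesystem must be mounted degraded
    mount -o degraded /dev/sdb /mnt/array

    # Rebuild onto a spare, referring to the dead device by its devid
    btrfs replace start <devid-of-dead-disk> /dev/sdz /mnt/array

    # Or drop the dead device entirely and restripe over the remaining ones
    btrfs device remove missing /mnt/array

With MDADM, tools like mdmonitor already do this kind of thing automatically; that is the gap in the btrfs tooling.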