Neil Brown Sends In His Final MD Pull Request


  • suberimakuri
    replied
    Thanks Neil!
    Been using mdadm on every multi-drive desktop for years. Brilliant.

    Thanks DrYak for the rundown too.

  • sarfarazahmad
    replied
    Thank you Neil! mdadm was one of the coolest features when it debuted; back then, hardware RAID was very stupid. mdadm is still very much relevant and useful today. It's features like these that have helped Linux grow in the datacenter to the place it holds today.

  • DrYak
    replied
    Originally posted by caligula:
    Both are used for raid*? No?
    The use cases are completely different.

    - mdadm (and also some pieces of dm) is used when you want a RAID *block device*, on which you can subsequently put anything:
    partition it further (e.g., using LVM), or put anything you want on it (any kind of filesystem, OR a swap partition, OR a data partition to be exported over iSCSI, OR handed to VMs, OR used directly by database systems).

    (and for HA servers, swap on RAID makes sense: you don't want your system to become corrupted when a drive dies)

    In general, mdadm covers all the uses of a real hardware RAID, at the cost of a few CPU cycles and the absence of battery-backed RAM, but with none of the drawbacks of hardware RAID (basically, with hardware RAID you need to keep a second RAID card of the same model to be safe, whereas with mdadm you can plug the disks into any Linux machine in an emergency).
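
    For illustration, a rough sketch of that workflow (device names like /dev/sdb are placeholders, adapt to your own disks):

        # assemble two disks into one RAID1 block device
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        # then layer whatever you want on top, e.g. a plain ext4 filesystem...
        mkfs.ext4 /dev/md0
        # ...or LVM, swap, an iSCSI export, etc. Check array health with:
        cat /proc/mdstat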


    - btrfs works at a completely different level: RAID is tightly integrated with the filesystem.

    Integration brings lots of small advantages: combined with the filesystem's checksumming, it can self-heal bitrot too (and not only broken drives).
    Combined with the COW property of the filesystem, the write hole can be completely averted.
    Basically, all the situations where a RAID could end up in an inconsistent state can be avoided or circumvented thanks to additional information available in the filesystem itself
    (information that is not available when using separate layers like the classical MD+LVM+ext4 stack).

    Also, because the RAID is done at the filesystem level, the operations are done on the data itself, i.e., only on extents used by files/metadata/the system, instead of every single block (or chunk of blocks), even those not used by any file.
    Thus rebuilding is very fast (only actual data gets reconstructed, not free space).
    In addition, this enables lots of unusual setups: disks of varying sizes (only the smallest common size is usable with classic RAID), or, e.g., 2x RAID1 replication across 3 disks (impossible with classic RAID, where you're just replicating the same chunk of blocks across all disks; with filesystem RAID you're doing 2x RAID1 between extents, and the filesystem is free to allocate those extents wherever it pleases).
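
    Again just a sketch (placeholder devices and mountpoint) of what that looks like on the btrfs side:

        # RAID1 for both data (-d) and metadata (-m), across three disks of any size
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
        mount /dev/sdb /mnt
        # scrub re-reads everything, verifies checksums, and repairs from the good copy
        btrfs scrub start /mnt
        # after swapping a dead disk for a new one, the rebuild only copies allocated extents
        btrfs replace start /dev/sdc /dev/sde /mnt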

    But filesystem RAID has a serious limitation: basically, you can only store *FILES* on it. You can't store partitions on it (and COW filesystems (and log-structured ones, for that matter) are notoriously bad with partition images and swapfiles: they fragment heavily, by design). You can't have several different partitions beyond the subvolume system offered by btrfs/zfs/etc.
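
    Subvolumes are the closest you get to separate partitions; a minimal sketch (hypothetical names):

        # subvolumes behave like lightweight, separately-mountable sub-filesystems
        btrfs subvolume create /mnt/@home
        mount -o subvol=@home /dev/sdb /home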


    So the TL;DR version:
    - both do RAID
    - but each does it in a different way
    - they don't cover the same use cases
    - for the end user, BTRFS might be enough; for advanced/server/workstation loads, MD answers situations that BTRFS/ZFS/etc. can't

  • bulletxt
    replied
    Originally posted by caligula:

    Both are used for raid*? No?
    Do you use trucks in Formula 1?
    Do you use a Formula 1 car in the desert?

    Oh wait, they both use motors powered by petroleum, so for sure one of the two must be obsolete now.

  • caligula
    replied
    Originally posted by bulletxt:

    No it's not. And your comment is totally stupid.

    Neil, thanks so much for your work. I've built more than 200 servers and they all use mdadm.

    THANKS!
    Both are used for raid*? No?

  • bulletxt
    replied
    Originally posted by caligula:
    Luckily with btrfs and checksum support, md is obsolete now.
    No it's not. And your comment is totally stupid.

    Neil, thanks so much for your work. I've built more than 200 servers and they all use mdadm.

    THANKS!

  • caligula
    replied
    Luckily with btrfs and checksum support, md is obsolete now.

  • waxhead
    replied
    My thanks to Neil Brown as well. mdadm and the RAID implementation are among the (many) reasons I left Windows behind for good!

  • rnavarro
    replied
    Yes, thanks Neil!

    I've used mdadm in nearly every system I've provisioned. He's made a wondrous contribution to the community, and I thank him so much for his efforts!

  • willmore
    replied
    Neil has been a wonderful contributor to the kernel. I remember testing RAID patches for him 15 years ago. I built three drive arrays for that express purpose. I still have them--somewhere. My hat is off to you, good sir, thank you for all of your hard work and leadership.
