
Neil Brown Sends In His Final MD Pull Request


  • Neil Brown Sends In His Final MD Pull Request

    Phoronix: Neil Brown Sends In His Final MD Pull Request

    Neil Brown has sent in his final MD subsystem pull request. Past this Linux 4.5 work, he's stepping down...


  • #2
    Thanks for your contributions, Neil! I've used your work directly in setting up some storage servers!



    • #3
      Neil has been a wonderful contributor to the kernel. I remember testing RAID patches for him 15 years ago. I built three drive arrays for that express purpose. I still have them--somewhere. My hat is off to you, good sir, thank you for all of your hard work and leadership.



      • #4
        Yes, thanks Neil!

        I've used mdadm in nearly every system I've provisioned. He's made a wondrous contribution to the community and I thank him so much for his efforts!



        • #5
          My thanks to Neil Brown as well. mdadm and the RAID implementation are one of the (many) reasons I left Windows behind for good!

          http://www.dirtcellar.net



          • #6
            Luckily with btrfs and checksum support, md is obsolete now.



            • #7
              Originally posted by caligula
              Luckily with btrfs and checksum support, md is obsolete now.
              No, it's not. And your comment is totally stupid.

              Neil, thanks so much for your work. I've built more than 200 servers and they all use mdadm.

              THANKS!



              • #8
                Originally posted by bulletxt

                No, it's not. And your comment is totally stupid.

                Neil, thanks so much for your work. I've built more than 200 servers and they all use mdadm.

                THANKS!
                Both are used for raid*? No?



                • #9
                  Originally posted by caligula

                  Both are used for raid*? No?
                  Do you use trucks in Formula 1?
                  Do you use a Formula 1 car in the desert?

                  Oh wait, they both use motors powered by petroleum, so for sure one of the two must be obsolete now.



                  • #10
                    Originally posted by caligula
                    Both are used for raid*? No?
                    The use case is completely different.

                    - mdadm (and also some pieces of dm) is used when you want a RAID *block device*, on which you can subsequently put anything you want:
                    partition it further (e.g. using LVM), or put anything directly on it (any kind of filesystem, or a swap partition, or a data partition to be exported over iSCSI, handed to VMs, or used directly by database systems).

                    (and for HA servers, swap on RAID makes sense: you don't want your system to become corrupted when a drive dies)

                    In general, mdadm can cover all the uses of a real hardware RAID, at the cost of a few CPU cycles and without the battery-backed RAM, BUT with none of hardware RAID's drawbacks (basically, with hardware RAID you need to keep a spare RAID card of the same model to be safe, whereas with mdadm, in case of emergency, you can plug the disks into any Linux machine).
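
                    To make the layering concrete, here's a minimal sketch; the device names (/dev/sdb1, /dev/sdc1) and volume group name (vg0) are hypothetical:

                        # Create a 2-disk RAID1 array; /dev/md0 is an ordinary block device
                        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

                        # Layer anything on top: here LVM, then an ext4 filesystem
                        pvcreate /dev/md0
                        vgcreate vg0 /dev/md0
                        lvcreate -L 100G -n data vg0
                        mkfs.ext4 /dev/vg0/data

                        # ...or instead use the array directly as redundant swap
                        mkswap /dev/md0
                        swapon /dev/md0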


                    - Btrfs works at a completely different level: RAID is tightly integrated with the filesystem.

                    Integration brings lots of small advantages: combined with the filesystem's checksumming, it can self-heal not only broken drives but ALSO bitrot.
                    Combined with the CoW property of the filesystem, the write hole can be completely avoided.
                    Basically, all the situations where a RAID could end up in an inconsistent state can be avoided or circumvented thanks to additional information available in the filesystem itself
                    (information that is not available when using separate layers like the classical MD+LVM+ext4 stack).

                    Also, because the RAID is done at the filesystem level, the operations work on the data itself, i.e. only on extents used by files/metadata/the system, instead of every single block (or chunk of blocks), including those not used by any file.
                    Thus rebuilding is very fast (only actual data gets reconstructed, not free space).
                    In addition, this enables lots of unusual setups: disks of variable size (with classic RAID, only the smallest common size is usable), or e.g. 2x RAID1 replication across 3 disks (impossible with classic RAID, where you just replicate the same chunk of blocks across all disks; with filesystem RAID you're doing 2x RAID1 between two extents, and the filesystem is free to allocate those extents wherever it pleases).
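
                    As a rough counterpart sketch (device names and mount point are hypothetical): Btrfs is handed whole disks and replicates internally, and a scrub uses the checksums to repair bad copies from good ones:

                        # Two copies of every data and metadata extent, spread across
                        # three whole disks, possibly of different sizes
                        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
                        mount /dev/sdb /mnt/data

                        # Scrub verifies checksums and rewrites corrupted copies from a
                        # good mirror (self-healing bitrot, not just surviving dead drives)
                        btrfs scrub start /mnt/data
                        btrfs filesystem usage /mnt/data    # shows per-device allocation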

                    But filesystem RAID has a serious limitation: basically, you can only store *FILES* on it. You can't store partitions on it (and CoW filesystems (and log-structured ones, for that matter) are notoriously bad with partition images and swap files: they fragment heavily, by design). You can't have several different partitions beyond the subvolume system offered by btrfs/zfs/etc.
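
                    For what it's worth, the common workaround for CoW fragmentation on things like VM images is to disable CoW per directory with chattr (the path here is hypothetical; note that nodatacow also disables checksumming for those files):

                        # New files created here inherit the nodatacow attribute;
                        # +C must be set while a file is still empty, so set it on the directory
                        mkdir /mnt/data/vm-images
                        chattr +C /mnt/data/vm-images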


                    So the TL;DR version:
                    - both do RAID
                    - but each does it in a different way
                    - they don't cover the same use cases
                    - for the end user, Btrfs might be enough; for advanced/server/workstation loads, MD answers situations that Btrfs/ZFS/etc. can't

