Mdadm vs. hardware RAID


  • Mdadm vs. hardware RAID

    Hello, I'm new here. I'm also curious, which would be faster, hardware raid or mdadm software raid?

    Am I correct in assuming that mdadm puts more strain on your CPU when writing/reading files?

    I think it would be cool if Phoronix would do some benchmarks.

  • #2
    Hardware raid is a hell of a lot faster as long as you are talking about a real raid controller.



    • #3
      I don't have any direct experience, but this is what I've heard: with a small array, under low load, with a modern x86 CPU (typical desktop use), softraid is usually faster; that is, a given request will probably get the data to its destination in less time. Of course, this is at the cost of CPU utilization. As you start looking at performance under load and with larger arrays (e.g. server situations), hardware RAID can start pulling ahead because it's not bottlenecked by contention for the CPU.
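
      If you want to put a rough number on that CPU cost, md's work shows up in ordinary kernel threads you can watch, and the kernel logs the raw parity throughput it benchmarked when the raid modules loaded (the names and numbers vary by machine):

        # watch the md kernel threads (e.g. md0_raid5, md0_resync) while the array is busy
        top
        ps -eo comm,pcpu | grep '^md'
        # parity/XOR speeds the kernel benchmarked when the raid modules were loaded
        dmesg | grep -iE 'raid6|xor'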

      Anyway, I think the main reason that "real" RAID cards fetch so much money is that they offer features like hot-swapping and online array rebuilding in a way that allows replacement of a dead drive with zero downtime.



      • #4
        Yeah, it's not so much about speed as data reliability. If the OS hangs, it'll affect software RAID since it runs on the CPU, but it should be fine with a separate RAID controller.



        • #5
          Originally posted by nanonyme:
          Yeah, it's not so much about speed as data reliability. If the OS hangs, it'll affect software RAID since it runs on the CPU, but it should be fine with a separate RAID controller.
          Well if your OS hangs, aren't you pretty much screwed anyways?

          Or do you mean that the current read/writes to the array would be able to finish?



          • #6
            Software raid can NEVER be *faster* (it is at best AS fast for stupid raid types, like striping) than hardware raid unless those who implemented the hardware raid were a bunch of rabid chimps on acid. Any kind of parity-raid (raid 5, 6, etc.) will ALWAYS be slower in software raid than in hardware raid, especially when the host CPU is under load.

            Hardware raid does NOT give you any hotswap and/or online array rebuilding that doesn't ALSO exist within mdraid. Yes, mdraid is happy to allow you to add/remove volumes from the array and/or rebuild the array entirely online with no downtime just like hardware raid.
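
            E.g. replacing a dead member while the array stays online is just this kind of thing (device names below are placeholders, adjust for your setup):

              mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # kick the dead disk out of the array
              mdadm /dev/md0 --add /dev/sdc1                       # add the replacement
              cat /proc/mdstat                                     # watch the rebuild run in the background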

            The advantage of hardware raid is performance, plain and simple. The advantage of software raid is portability (i.e., you can plug your disks into ANY controller and access your array rather than being stuck buying a new card if the card burns out), and cost (i.e., you don't need to buy a raid card for software raid).

            FAKERAID (which is a hardware device that pretends to be hardware raid when the raid function is actually entirely within software) is complete crap. It's not fast, portable, OR cheap.



            • #7
              Well, fakeraid is enough for Windows to get pure speed for the boot drive (I use 3x200 GB RAID 0 for that purpose). When you try Linux on it, you can try lots of things to boot from it. From a separate boot drive you can easily access dmraid partitions via /dev/mapper/xxx, however. My 3.16 GHz quad core has no problems with some extra CPU usage.
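
              If someone wants to reproduce that, something along these lines is enough to get at the fakeraid set from Linux (the set names differ per BIOS/chipset):

                dmraid -r      # list the raid sets the BIOS defined
                dmraid -ay     # activate them; the devices show up under /dev/mapper/
                ls /dev/mapper/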



              • #8
                Originally posted by lbcoder:
                Software raid can NEVER be *faster* (it is at best AS fast for stupid raid types, like striping) than hardware raid unless those who implemented the hardware raid were a bunch of rabid chimps on acid.
                Why? All hardware RAID cards I've looked at are doing a substantial amount of their work in software on a general-purpose microprocessor that's a good deal slower than modern desktop CPUs (which makes sense if you think about the engineering constraints). Of course the implementation is thoroughly optimized for running a RAID array, but I don't think it follows that this must be faster than software RAID in all situations.

                Hardware raid does NOT give you any hotswap and/or online array rebuilding that doesn't ALSO exist within mdraid. Yes, mdraid is happy to allow you to add/remove volumes from the array and/or rebuild the array entirely online with no downtime just like hardware raid.
                I guess I could have worded that part better. My point is that if you get a hardware RAID card, those features are advertised and supported by the hardware vendor. You can of course get a cheap hotswap-capable controller card and run mdraid on that, but realistically it's probably going to be untested and unsupported unless you're buying it in a preassembled NAS or something like that.



                • #9
                  Originally posted by NullHead:
                  Or do you mean that the current read/writes to the array would be able to finish?
                  Yeah, everything that's in the controller gets written to the array cleanly. That's mostly the point, methinks.



                  • #10
                    Originally posted by Kano:
                    Well, fakeraid is enough for Windows to get pure speed for the boot drive (I use 3x200 GB RAID 0 for that purpose). When you try Linux on it, you can try lots of things to boot from it. From a separate boot drive you can easily access dmraid partitions via /dev/mapper/xxx, however. My 3.16 GHz quad core has no problems with some extra CPU usage.
                    Fakeraid is mostly a joke anyway. It's basically BIOS trickery for operating systems that don't have a useful software RAID implementation.



                    • #11
                      The way I see it is:

                      Use mdadm if you want to keep your data (RAID 1). Use hardware RAID 0 if you don't care about losing all your data. If that RAID chip dies, you've lost it anyway, unless you have deep pockets and can find a replacement for it.
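
                      A minimal mdadm RAID 1 along those lines would be something like this (the disk names are just examples):

                        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
                        mkfs.ext4 /dev/md0
                        mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it assembles at boot (config path varies by distro)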



                      • #12
                        Originally posted by lbcoder:
                        Software raid can NEVER be *faster* (it is at best AS fast for stupid raid types, like striping) than hardware raid unless those who implemented the hardware raid were a bunch of rabid chimps on acid. Any kind of parity-raid (raid 5, 6, etc.) will ALWAYS be slower in software raid than in hardware raid, especially when the host CPU is under load.
                        Define faster. Are you talking bandwidth or latency? I can see software raid on a lightly loaded system taking even the best hardware raid to the cleaners for latency, because it's one less layer of indirection to get to the data.



                        • #13
                          This all depends on the usage of the server you're talking about. Hardware RAID basically offloads the I/O handling to the controller, but generally doesn't offer any extra functionality, as already stated. And a RAID array created on specific hardware won't run on different hardware, while a software RAID array will.
                          For a desktop system, you won't see a difference between soft and hard RAID. For a server, it basically depends on whether the server is CPU-constrained or not: I've used software RAID with ZFS for disk backups and it worked pretty well, but then again, the server was not doing anything other than writing to disk. Many server I/O profiles, like web servers, put rather limited pressure on I/O, so all of these will benefit from the added security of software RAID, since the I/O impact will be low.
                          For me, at home or in small offices, soft RAID is largely enough. For bigger sites, data that needs to be secured should be located on a disk array anyway. So I tend to consider hardware RAID controllers mostly useless.
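
                          That portability is quite literal, since the md metadata lives on the disks themselves; after moving them to any other controller, something like this brings the array back:

                            mdadm --assemble --scan   # scan all disks for md superblocks and reassemble the arrays found
                            cat /proc/mdstat          # check that the array came up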



                          • #14
                            In my OP I was thinking more in terms of raw throughput/bandwidth, not necessarily latency, but that is also relevant to this thread.

                            Perhaps Phoronix could do some benchmarks of an onboard RAID controller (such as what an Nvidia chipset might have), some kind of hardware PCI/PCI-X/PCI-E RAID controller (whichever is more convenient), and mdadm, all using the same hard drives and testing setup.
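
                            Even without a full article, a rough sequential-throughput comparison of each setup is easy enough to run, e.g. (the path and sizes below are just examples):

                              # sequential write, bypassing the page cache
                              dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 oflag=direct
                              # sequential read
                              dd if=/mnt/array/testfile of=/dev/null bs=1M iflag=direct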



                            • #15
                              c't has tested onboard raid vs software raid vs hardware raid in the past.

                              Conclusion: onboard raid is a waste of time, and hardware raid has to be very good (and expensive) to pull away. And even then, the differences are pretty minor.
