mdadm vs. hardware RAID

  • #11
    The way I see it is:

    Use mdadm if you want to keep your data (RAID 1). Use hardware and RAID 0 if you don't care about losing all your data. If that RAID chip dies, you've lost it anyway unless you have deep pockets and can find a replacement for it.

    • #12
      Originally posted by lbcoder View Post
      Software raid can NEVER be *faster* (it is at best AS fast for stupid raid types, like striping) than hardware raid unless those who implemented the hardware raid were a bunch of rabid chimps on acid. Any kind of parity-raid (raid 5, 6, etc.) will ALWAYS be slower in software raid than in hardware raid, especially when the host CPU is under load.
      Define faster. Are you talking bandwidth or latency? I can see software RAID on a lightly loaded system taking even the best hardware RAID to the cleaners on latency, because it's one less layer of indirection to get to the data.
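
      To make the parity argument concrete, here is a minimal Python sketch (purely illustrative - not how mdadm or any controller firmware is actually written) of the XOR work a RAID 5 full-stripe write asks of whichever processor runs the RAID layer: the host CPU for mdadm, the card's coprocessor for hardware RAID.

      ```python
      # Illustrative RAID 5 parity math: the parity chunk is the XOR of all
      # data chunks in a stripe, and a lost chunk is recovered the same way.

      def xor_chunks(chunks: list[bytes]) -> bytes:
          """XOR equal-sized chunks together byte by byte."""
          out = bytearray(len(chunks[0]))
          for chunk in chunks:
              for i, b in enumerate(chunk):
                  out[i] ^= b
          return bytes(out)

      # A full-stripe write on a 4-data-disk array: compute parity first,
      # then issue five writes (4 data chunks + 1 parity chunk).
      stripe = [bytes([d]) * 65536 for d in (1, 2, 3, 4)]   # four 64 KiB chunks
      parity = xor_chunks(stripe)

      # Rebuilding a failed disk's chunk = XOR of the survivors plus parity.
      assert xor_chunks(stripe[1:] + [parity]) == stripe[0]
      ```

      Mirroring and plain striping skip this step entirely, which is why RAID 0/1 in software costs almost nothing, while RAID 5/6 writes eat host CPU (the md driver uses optimized XOR routines, but it is still the host doing the work).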

      • #13
        This all depends on how the server in question is used. Hardware RAID basically offloads I/O handling to the controller, but generally doesn't offer any extra functionality, as already stated. And basically, a RAID array created on one controller won't run on different hardware, while a software RAID array will.
        For a desktop system, you won't see a difference between soft and hard RAID. For a server, it basically depends on whether the server is CPU-constrained or not: I've used software RAID with ZFS for disk backups and it worked pretty well, but then again, the server wasn't doing anything other than writing to disk. Many server I/O profiles, like web servers, put rather limited pressure on I/O, so they will benefit from the added safety of software RAID, since the I/O overhead will be low.
        For me, at home or in small offices, soft RAID is largely enough. For bigger sites, data that needs to be protected should live on a disk array anyway. So I tend to consider hardware RAID controllers mostly useless.

        • #14
          In my OP I was thinking more in terms of raw throughput/bandwidth, not necessarily latency, but latency is also relevant to this thread.

          Perhaps Phoronix could benchmark an onboard RAID controller (such as what an Nvidia chipset might have), some kind of hardware PCI/PCI-X/PCI-E RAID controller (whichever is more convenient), and mdadm, all using the same hard drives and test setup.

          • #15
            ct has tested onboard RAID vs. software RAID vs. hardware RAID in the past.

            Conclusion: onboard RAID is a waste of time, and hardware RAID has to be very good (and expensive) to pull away. And even then, the differences are pretty minor.

            • #16
              I run the OS side of most of my Sun servers in RAID1 (or read-optimized RAID10 if I need the speed), using mdadm under Linux or using the native mirroring tools under Solaris. If I have Veritas on Solaris I generally go that route.

              I like being able to move the drives between any of my servers (as long as the server takes spud brackets, anyway) and still be able to recover the array.
              Apps generally run on SAN hardware, so speed isn't an issue there (having 16-32GB of fast cache helps).

              • #17
                why are you using raid1 and not 5 or 6? Just asking.

                • #18
                  Originally posted by energyman View Post
                  why are you using raid1 and not 5 or 6? Just asking.
                  Mirroring is more robust, particularly with software RAID: parity-based RAID has some horrible failure cases. For example, if one 2TB drive fails due to old age, the odds of a second 2TB drive failing while you rebuild the array are not insignificant... and then your entire RAID is gone.
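
                  To put rough numbers on that: the usual back-of-the-envelope calculation uses the unrecoverable-read-error (URE) spec rather than whole-drive failure, but it makes the same point. The figures below assume the commonly quoted consumer-drive rating of one URE per 1e14 bits read - an assumption for illustration only, real drives vary widely.

                  ```python
                  # Rough rebuild-risk arithmetic (illustrative assumptions only).
                  URE_RATE = 1e-14    # assumed: one unrecoverable read error per 1e14 bits
                  TB_PER_DRIVE = 2    # 2 TB drives, as in the post above

                  def p_read_error(drives_read: int) -> float:
                      """Chance of at least one URE while re-reading the survivors end to end."""
                      bits = drives_read * TB_PER_DRIVE * 1e12 * 8
                      return 1 - (1 - URE_RATE) ** bits

                  # RAID 5 of five 2 TB drives: a rebuild re-reads all 4 survivors (~47%).
                  print(f"RAID 5 rebuild: {p_read_error(4):.0%} chance of hitting a URE")
                  # RAID 1: only the single surviving mirror is re-read (~15%).
                  print(f"RAID 1 rebuild: {p_read_error(1):.0%} chance of hitting a URE")
                  ```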

                  • #19
                    Not with RAID 6, where you can have two devices fail and still recover. And with mirroring - what do you do when one disk reports the contents of block X as FOO but the other one as BAR? Which copy is correct?
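
                    Fair point on the mirror ambiguity - plain RAID 1 keeps no per-block checksums, so on a mismatch it can only pick one copy. That gap is exactly what checksumming filesystems like the ZFS setup mentioned earlier close. A minimal sketch (illustrative only, not md's or ZFS's actual code):

                    ```python
                    # A stored per-block checksum lets a scrub decide which mirror copy is good.
                    import hashlib

                    def checksum(block: bytes) -> bytes:
                        return hashlib.sha256(block).digest()

                    recorded = checksum(b"FOO")        # written alongside the block's metadata
                    copy_a, copy_b = b"FOO", b"BAR"    # the two mirrors now disagree

                    for disk, copy in (("disk A", copy_a), ("disk B", copy_b)):
                        ok = checksum(copy) == recorded
                        print(disk, "good" if ok else "corrupt -> rewrite from the good copy")
                    ```

                    Without that checksum, FOO and BAR are just two different answers and the RAID layer has no basis to prefer either.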

                    • #20
                      The quick answer is that software RAID is faster unless your server is under intense CPU load all the time (in which case you benefit from the controller's onboard coprocessor), but I would never recommend using mdadm for anything relied on in production.

                      I have stats from a 3ware 8500 and dual HighPoint RocketRAID 1820s showing that RAID 5 and RAID 6 are significantly (20%+) faster in software (mdadm) than with either card's Linux 2.6.18 driver. I don't think anyone would call these cards fakeRAID. Granted, this is a little dated, but still.

                      However, in both cases I got lots of fake bad disks (i.e. the software RAID would detect an intermittent blip and disable the drive), which made using it this way a nightmare to maintain in a production environment. At first I thought it was bad SATA cables, or drive enclosures, or a bad batch of disks, etc. However, my arrays were degraded almost 5% of the time, since one of these bad-disk events would occur almost weekly under load. Eventually I decided the speed wasn't as important as the reliability of the array and switched back to hardware. The same disks/cables/enclosures/controllers never had an issue in hardware RAID.
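
                      No argument with the conclusion, but for what it's worth the "quietly degraded array" part is at least easy to catch: mdadm ships its own alerting (mdadm --monitor), and even just watching /proc/mdstat works. A rough sketch - /proc/mdstat is a real kernel interface, but the parsing here is deliberately simplistic and only illustrative:

                      ```python
                      # Flag md arrays whose status bracket ([UU] vs [U_]) shows a missing disk.
                      import re
                      import sys

                      def degraded_arrays(path: str = "/proc/mdstat") -> list[str]:
                          with open(path) as f:
                              text = f.read()
                          bad = []
                          for name, status in re.findall(
                                  r"^(md\d+)\s*:.*?\[([U_]+)\]", text, re.M | re.S):
                              if "_" in status:   # "_" marks a failed or missing member
                                  bad.append(name)
                          return bad

                      if __name__ == "__main__":
                          bad = degraded_arrays()
                          print("DEGRADED: " + ", ".join(bad) if bad else "all md arrays healthy")
                          sys.exit(1 if bad else 0)
                      ```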
