Best chipset for Linux software RAID?

  • Best chipset for Linux software RAID?

    I intend to put together a RAID-6 array using seven 320G SATA II drives (six active and one hot spare) for use as a home file server.

    Unlike most people, I am actually picking out new hardware for this rather than using old parts I already have lying around (my old parts are too old). So what I really need is hardware advice.

    As I will be using the on-board SATA ports on whatever mobo I choose, I got interested in their performance. I easily found clear info on the Intel side of things: the Direct Media Interface runs at 1000MB/s (it is basically an x4 PCIe 1.x link).

    What is not clear to me is how the link to an AMD SB7xx south bridge performs. I know it uses HyperTransport, but what version?

    While I am not too worried about day to day performance, I am interested in being able to rebuild the RAID as fast as possible when a drive fails. Does this process utilize more than one CPU core? Where's the sweet spot on CPU speed vs. cost for this? Will running a 64-bit OS help with rebuild speed? Does RAM matter?

    Something else I've wondered about is how well SATA hot-swap (not just hotplug!) works in Linux. Do SB7xx and ICH10 support this equally?
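
    For reference, the plan is plain Linux md RAID via mdadm, roughly along these lines (device names are just placeholders for however the drives end up enumerating):

    # create the RAID-6 array: six active drives plus one hot spare
    mdadm --create /dev/md0 --level=6 --raid-devices=6 --spare-devices=1 /dev/sd[b-h]
    # watch the initial sync / any later rebuild
    cat /proc/mdstat
    # rebuild speed floor/ceiling knobs I plan to experiment with (values in KB/s)
    echo 50000 > /proc/sys/dev/raid/speed_limit_min
    echo 200000 > /proc/sys/dev/raid/speed_limit_max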

  • #2
    Hey Megaweapon,

    Not sure if this helps at all, but for my home server I am running a 2.25TB RAID-5 NAS that I built new a year ago. I used the mobo's built-in RAID controller.
    Specs:
    MSI G965M-FIR LGA 775 Intel G965 Express Micro ATX Intel Motherboard
    Intel Pentium 4 631 Cedar Mill 3.0GHz LGA 775 Single-Core Processor Model BX80552631
    Transcend 1GB 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Desktop Memory Model JM800QLJ-1G
    APEX TX-346 Black/Silver Steel ATX Mini Tower Computer Case ATX12V 300W Intel & AMD Listed Power Supply
    4x 750GB HDDs: Western Digital Caviar SE16 WD7500AAKS 750GB 7200 RPM SATA 3.0Gb/s
    1x bootable thumb drive

    I boot the system off of a thumb drive (mounted on the inside of the case) so I do not lose any space in the RAID array for the OS. My OS of choice is FreeNAS (FreeBSD-based). I have maxed the throughput of the server at 753Mb/s over Cat5e cable on a gigabit LAN with jumbo frames.
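
    For what it's worth, the jumbo frames part is just an MTU bump on the NIC (the switch has to support it too); the interface names below are only examples:

    # FreeBSD/FreeNAS
    ifconfig em0 mtu 9000
    # Linux equivalent
    ip link set dev eth0 mtu 9000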

    I have no knowledge of how a drive swap would go or how long it would take, as I have not had to perform one yet.



    • #3
      Originally posted by Megaweapon View Post
      As I will be using the on-board SATA ports on whatever mobo I choose, I got interested in their performance. I easily found clear info on the Intel side of things: the Direct Media Interface runs at 1000MB/s (it is basically an x4 PCIe 1.x link).

      What is not clear to me is how the link to an AMD SB7xx south bridge performs. I know it uses HyperTransport, but what version?
      Believe it or not, this is actually a common misconception. The southbridge actually connects to the northbridge using a PCIe x4 v1.1 interconnect, which has roughly 2GB/s of bandwidth. However, that doesn't mean the SATA controllers will run that fast. Pretty much all the disk-related bandwidth will stay local to the southbridge, moving from one SATA controller to another. Each SATA controller should be able to give a maximum burst rate of about 300MB/s. I'm not sure how the SATA controllers are interconnected inside the chip, so that may or may not be aggregate.

      I guess the question is: how are the SATA controllers interconnected inside the chip? I don't know the answer to that.
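
      One rough way to find out empirically would be to benchmark the drives one at a time and then all at once and compare the totals. A quick sketch, assuming the disks show up as /dev/sda through /dev/sdf:

      # one at a time: per-drive buffered read speed
      for d in /dev/sd[a-f]; do hdparm -t $d; done
      # all at once: if the combined throughput comes out much lower than the sum of
      # the individual runs, the controllers share a link rather than aggregating
      for d in /dev/sd[a-f]; do hdparm -t $d & done; wait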



      • #4
        Originally posted by duby229 View Post
        Believe it or not, this is actually a common misconception. The southbridge actually connects to the northbridge using a PCIe x4 v1.1 interconnect, which has roughly 2GB/s of bandwidth.
        Looking back at how I came to that conclusion, I see that you're correct. I must have been tired when I was doing that research.

        I ended up going with the AMD. Since nobody seems to know anything about the optimal chipset for this, I have to assume that it's all pretty much the same.



        • #5
          First of all, nowadays old motherboards have become very pricey, at least 3000 Rs, so why don't you go for newer ones? Anyway, here's what you want: you can buy your motherboard from eBay.



          • #6
            Originally posted by duby229 View Post
            Believe it or not, this is actually a common misconception. The southbridge actually connects to the northbridge using a PCIe x4 v1.1 interconnect, which has roughly 2GB/s of bandwidth. However, that doesn't mean the SATA controllers will run that fast. Pretty much all the disk-related bandwidth will stay local to the southbridge, moving from one SATA controller to another. Each SATA controller should be able to give a maximum burst rate of about 300MB/s. I'm not sure how the SATA controllers are interconnected inside the chip, so that may or may not be aggregate.

            I guess the question is: how are the SATA controllers interconnected inside the chip? I don't know the answer to that.

            If the southbridge has an x4 PCI Express connection, then it has 4 x 2.5Gbit/s = 10Gbit/s.

            I hope you are planning on using Linux software RAID, because onboard mobo RAID plain sucks.

            If you want rebuilds to go quicker, use smaller disks. If you want quicker throughput, use more, smaller disks. Since you're spending quite a lot of money, have you considered getting a PCIe x8 SATA controller? Burst speeds would increase, you wouldn't max out the southbridge, and you would get plenty of SATA ports while being able to choose from more mobos...

            You seem pretty set on RAID-6, which is fair enough, but it's very CPU-intensive; have you considered RAID-10 for more performance? I've just set up an old NetApp filer disk shelf with 14x72GB FC SCSI disks as a RAID-10 and it works really well. Haven't actually measured the throughput yet, but this thing truly screams.
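
            With Linux md that kind of array is basically a one-liner; a rough sketch, assuming the 14 disks show up as /dev/sdb through /dev/sdo:

            # RAID-10 across 14 disks (md's default near-2 layout)
            mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]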



            • #7
              Originally posted by lordmozilla View Post
              If the southbridge has an x4 PCI Express connection, then it has 4 x 2.5Gbit/s = 10Gbit/s.
              That's bits versus bytes: divide by 8 and the 10Gbit/s per direction comes out to 1.25GB/s, so roughly 2GB/s once you count both directions and the encoding overhead. If you want an exact number I can do some research and find out for you.
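
              Spelled out, assuming standard PCIe 1.1 signalling with its 8b/10b line encoding:

              4 lanes x 2.5Gbit/s = 10Gbit/s raw, per direction
              10Gbit/s x 8/10 (encoding overhead) = 8Gbit/s of payload = 1GB/s per direction
              1GB/s x 2 directions = 2GB/s aggregate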
              Last edited by duby229; 09 March 2009, 07:21 PM.



              • #8
                Originally posted by lordmozilla View Post
                You seem pretty set on RAID-6, which is fair enough, but it's very CPU-intensive; have you considered RAID-10 for more performance? I've just set up an old NetApp filer disk shelf with 14x72GB FC SCSI disks as a RAID-10 and it works really well. Haven't actually measured the throughput yet, but this thing truly screams.

                I'm sure that thing does scream, but damn, the risk of having drives go bad with that many in an array has just increased substantially. It doesn't matter if it is redundant or not, you're pretty much guaranteed to swap out a drive every couple of weeks.



                • #9
                  Originally posted by Megaweapon View Post
                  <snip>

                  Something else I've wondered about is how well SATA hot-swap (not just hotplug!) works in Linux. Do SB7xx and ICH10 support this equally?
                  Hot-swapping is generally handled by the controller of the unit/bay you intend to purchase for this ability. Array rebuilds are then "generally" handled while the system is running. The idea behind hot-swap is that system availability is the priority (i.e. reboots/shutdowns should be the very last option in case of a disk failure). Hot-swap gives you the ability to replace a failed disk while maintaining system availability.

                  dmraid {-R|--rebuild} RAID-set [drive_name]: this can be used to rebuild the array while the system is live. More info on dmraid can be found with dmraid --help and man dmraid.

                  hth.

                  wanted to add a little more info to this post since I have recently purchased a 4-disk hot-swappable bay and have had a chance to play with it a little (it uses 2.5in drives and occupies a single 5.25in bay, so I don't quite have enough laptop drives yet to do more play^H^H^H^H testing):

                  to remove a failed drive from the system:
                  hdparm -Y $DEVICE_NAME (ex: hdparm -Y /dev/sde)

                  this turns the drive off and helps prevent a voltage surge when removing a failing/failed drive.

                  remove the drive and put in the new one.

                  scsiadd -s will rescan your scsi/sata buses and present new devices to the system (on ubuntu this is not installed by default: apt-get install scsiadd).

                  then see above about rebuilding your array with dmraid.
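
                  if the array is plain md software RAID (mdadm) rather than a dmraid set, the equivalent sequence looks roughly like this (md0, sde and host0 are just placeholder names):

                  # mark the dying disk as failed and pull it out of the array
                  mdadm /dev/md0 --fail /dev/sde --remove /dev/sde
                  # spin it down before yanking it (as above)
                  hdparm -Y /dev/sde
                  # after inserting the replacement, rescan the SATA/SCSI bus
                  echo "- - -" > /sys/class/scsi_host/host0/scan
                  # add the new disk; the rebuild kicks off automatically
                  mdadm /dev/md0 --add /dev/sde
                  # watch the rebuild
                  cat /proc/mdstat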

                  hope the added info here helps.
                  Last edited by justsumdood; 11 June 2009, 01:43 PM.



                  • #10
                    Get a real RAID controller

                    Areca 1220 RAID controller is inexpensive and performance is excellent.

                    It may seem expensive, but put values on your data and your time, and this card is pretty cheap.

                    Jaw-dropping data transfer rates, well over 400 MB/sec, with no CPU overhead.

                    I see bottlenecks with cheap PCIe x1 SATA controllers; they are nowhere near as fast as motherboard controllers.

                    I used Software RAID for many years, and I'm not going back. This Areca card is a BIG improvement.

                    Someone said to use small disks, and I agree. I use Seagate 7200.12 500 GB drives. One platter, low profile, low power, excellent performance.
                    Last edited by frantaylor; 07 July 2009, 12:20 PM.

