Best chipset for Linux software RAID?

  • #16
    Originally posted by lordmozilla View Post
    You can run X25s from an external RAID controller just like your hard drives.

    I'm not knocking your controller; I mean hardware controllers are better, just annoying when they fail. ;-)

    I'm saying that going for 7200 rpm drives for speed - no matter how many of them - is silly these days, when SSDs easily blow drives like that out of the water. Since they are SATA II, RAIDing them with a good card would put 600 MB/s into the possible range.

    113 MB/s... two SSDs in hardware RAID0 will easily beat that. Lower power consumption, lower failure rate... less space, no noise... the list goes on forever.

    Have you even looked at Intel X25 benchmarks?
    Don't forget that 115 MB/sec is for ONE $50 drive.

    Yes I did. 250 MB/sec. Very impressive. But I can get the same bandwidth and 16X the capacity for 1/3 the price by using two cheap disks.
    Last edited by frantaylor; 07-07-2009, 07:38 PM.



    • #17
      Originally posted by frantaylor View Post
      Don't forget that 115 MB/sec is for ONE $50 drive.

      Yes I did. 250 MB/sec. Very impressive. But I can get the same bandwidth and 16X the capacity for 1/3 the price by using two cheap disks.
      I'm not convinced you can get that in sustained output at all. Plus, RAID0 overhead is massive with 7 disks, even with a brilliant controller.

      We'll just have to agree to disagree.

      Brendan



      • #18
        I use 8x 1TB disks in RAID6 on an Intel DG43NB, together with a PCIe x1 SATA controller using the SiI3132 chipset.

        The disks are WD GreenPower 5400 rpm drives, and if we're talking sequential performance, it does quite well:
        ida:~# dd if=/dev/md0 of=/dev/null bs=5M count=1000 iflag=direct
        5242880000 bytes (5.2 GB) copied, 13.535 s, 387 MB/s
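
        For anyone wanting to replicate this, the array is plain Linux md (mdadm); a minimal sketch of creating a similar 8-drive RAID6, assuming the disks show up as /dev/sdb through /dev/sdi (the device names are an assumption, not my actual layout):

        # create the 8-drive RAID6 array (double-check the device names first!)
        mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
        # watch the initial resync progress
        cat /proc/mdstat

        Note that sequential numbers like the one above will look worse until the initial resync has finished.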



        • #19
          Originally posted by frantaylor View Post
          Yes I did. 250 MB/sec. Very impressive. But I can get the same bandwidth and 16X the capacity for 1/3 the price by using two cheap disks.
          Please note that these numbers do not tell the whole story.

          First, the 250 MB/s for the X25 is constant throughout the disk, whereas the 115 MB/s will fall rapidly towards ~70 MB/s as the disk fills up.

          Second, the X25 has an enormous advantage in latency and IOPS - think 1-3 orders of magnitude better performance. This will make a tremendous difference in your case (parallel VMs) - I wouldn't be surprised to see even a *single* X25 drive outperform the RAID configuration in response times *and* transfer speeds under heavy load (e.g. 3 VMs reading/writing full throttle.)

          Third, SSD performance scales almost linearly with the number of drives. Four drives and you've hit the GB/s mark, with latency almost completely unaffected (unlike mechanical drives).

          Finally, these drives should be much more reliable than mechanical disks.

          Of course, the size/price ratio is where it all falls down. SSD prices will probably not catch up to mechanical drives, but they seem to be falling at an impressive rate (the X25 was more than double the price half a year ago). If they keep falling at this rate, they could actually become affordable by the end of next year.

          IIRC, Intel is planning to announce new SSD models this or next week, so hopefully we'll see larger models soon.
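
          For anyone who wants to measure that latency/IOPS gap themselves, fio works well; a minimal sketch, assuming the drive under test is /dev/sdb (the device name is an assumption - double-check it before pointing this at a real disk):

          # random 4 KiB reads at queue depth 32, bypassing the page cache;
          # mechanical disks collapse under this load, SSDs barely notice
          fio --name=randread --filename=/dev/sdb --rw=randread --bs=4k \
              --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based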
          Last edited by BlackStar; 07-08-2009, 09:01 AM.



          • #20
            Originally posted by BlackStar View Post
            Please note that these numbers do not tell the whole story.

            First, the 250 MB/s for the X25 is constant throughout the disk, whereas the 115 MB/s will fall rapidly towards ~70 MB/s as the disk fills up.

            Second, the X25 has an enormous advantage in latency and IOPS - think 1-3 orders of magnitude better performance. This will make a tremendous difference in your case (parallel VMs) - I wouldn't be surprised to see even a *single* X25 drive outperform the RAID configuration in response times *and* transfer speeds under heavy load (e.g. 3 VMs reading/writing full throttle.)

            Third, SSD performance scales almost linearly with the number of drives. Four drives and you've hit the GB/s mark, with latency almost completely unaffected (unlike mechanical drives).

            Finally, these drives should be much more reliable than mechanical disks.

            Of course, the size/price ratio is where it all falls down. SSD prices will probably not catch up to mechanical drives, but they seem to be falling at an impressive rate (the X25 was more than double the price half a year ago). If they keep falling at this rate, they could actually become affordable by the end of next year.

            IIRC, Intel is planning to announce new SSD models this or next week, so hopefully we'll see larger models soon.
            This is all well and good, but I need to have ~1.5 TB online, and there's just no way I am going to be able to afford to do that with SSDs.

            And the problem with using a RAID array of SSDs is that you are just moving the bottleneck to some internal bus. More than 2 or 3 in parallel is just a waste. You'll have enormous bandwidth at the SATA connectors, but the PCIe x8 bus connector on the RAID controller is not getting any faster.
            Last edited by frantaylor; 07-08-2009, 10:29 AM.



            • #21
              Agreed, SSDs are simply not cost-effective at this point. 1.5TB costs something like $6000-6500 - no good.

              I'm pretty sure the RAID processor would max out before the PCIe bus is saturated. A gen-1 x8 PCIe slot provides 2 GB/s of bandwidth in each direction (Wikipedia), and gen-2 should double that. Those rates are close to DDR2 memory speeds, and I doubt RAID cards are designed with those kinds of numbers in mind.
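
              To sanity-check that figure (a back-of-the-envelope sketch; 250 MB/s is the usual per-lane number for PCIe 1.x after 8b/10b encoding):

              # PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s per lane per direction
              echo $((250 * 8))   # x8 slot: 2000 MB/s = 2 GB/s each direction; PCIe 2.0 doubles it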



              • #22
                Of course the filesystem is all-important on a RAID system. I am using XFS. I have defragmented all my VMware images, and I mount with "allocsize=512m" to cut down on new fragmentation.
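
                For anyone trying the same setup, a minimal sketch of that mount, assuming the array is /dev/md0 and the mount point is /vm (both names are assumptions):

                # allocsize=512m makes XFS speculatively preallocate large extents,
                # which keeps big, growing files like VM images from fragmenting
                mount -o allocsize=512m /dev/md0 /vm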

                With defragmented XFS image files, my VMware images are seeing excellent disk performance, almost as good as real disks.

                XFS seems like the only choice for this application. I accidentally formatted a big RAID drive as ext3 once; it was so slow that I got totally disgusted about 2 minutes into using it.

                Defragging makes a HUGE difference in throughput on these big files. I have no experience with SSDs - is defragging as important there? It seems like it wouldn't matter so much, because there is no seek delay.

                If you are going to defragment, you need to make sure the drives are not too full, or else the defragmentation process has no elbow room.
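
                For reference, the XFS defragmenter is xfs_fsr; a minimal sketch, using the same assumed /dev/md0 and /vm names as above:

                # report the current fragmentation factor (read-only, safe on a mounted fs)
                xfs_db -r -c frag /dev/md0
                # reorganize files on the mounted filesystem; -v shows what it's doing
                xfs_fsr -v /vm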
                Last edited by frantaylor; 07-09-2009, 01:47 PM.

