Building A Low-Cost Btrfs SSD RAID Array For $80

    Phoronix: Building A Low-Cost Btrfs SSD RAID Array For $80

    With the falling prices of solid-state storage, it's becoming increasingly affordable to build a RAID array of SSDs. I have delivered many Btrfs RAID benchmarks on Phoronix over the years, and today I have some fresh RAID0 and RAID1 numbers for Btrfs atop the latest Linux 4.5 development kernel when using two low-cost SSDs that retail for just around $40 USD apiece.

    http://www.phoronix.com/vr.php?view=22782
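    For anyone wanting to reproduce a setup like the one benchmarked, here is a minimal sketch of the mkfs.btrfs invocations (not necessarily the article's exact flags), assuming the two SSDs appear as /dev/sdb and /dev/sdc (hypothetical device names; double-check yours before running anything destructive):

```shell
# WARNING: destroys any existing data on the named devices.

# Striped data with mirrored metadata (a typical RAID0 configuration):
mkfs.btrfs -f -d raid0 -m raid1 /dev/sdb /dev/sdc

# Or fully mirrored data and metadata (the RAID1 configuration):
# mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole multi-device filesystem:
mount /dev/sdb /mnt/ssd-array
```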

  • #2
    SSDs are so fast, is it even worth it to RAID them?

    • #3
      Originally posted by uid313 View Post
      SSDs are so fast, is it even worth it to RAID them?
      I RAID0 them in my laptop (240 + 256 GB) just so they're "one disk"; not sure if there's any real performance benefit to it.

      • #4
        Originally posted by uid313 View Post
        SSDs are so fast, is it even worth it to RAID them?
        Um, the SSDs on the market differ quite radically in terms of IOPS and read/write speed. Sometimes a drive that is 2x faster is 10x more expensive. The slowest SSDs deliver < 10k IOPS and < 50 MB/s read/write; the fastest deliver > 500k IOPS and > 3 GB/s. In theory, RAID-0 doubles throughput, so even if you buy the fastest drive of them all, you could still make it faster with RAID-0.

        Plenty of reasons to shave off some costs when building a balanced system.

        • #5
          This btrfs RAID1 thing is quite broken, isn't it? I mean, the same read speed as with one drive?

          • #6
            Originally posted by uid313 View Post
            SSDs are so fast, is it even worth it to RAID them?
            Depends on your requirements, doesn't it? If you really need as much performance as you can get, and you're already using SSDs and they're the performance bottleneck, *and* you're already using high-end PCIe-based drives (or are too cheap to pay the premium for them)... well, maybe it's worth putting an SSD array into your PC. But I can't imagine that many people will benefit from that...

            Of course, it *does* make sense when you're talking about enterprise hardware... many-terabyte databases running on an SSD-based SAN.

            • #7
              Of course, the real reason to use Btrfs in RAID 1+ is to take advantage of the automatic file checksumming/metadata duplication.

              So if a sector on one of the drives goes bad, Btrfs will detect the corruption when the sector is read and automatically repair it for you, assuming the copy of the data on the other drive is still valid. This can also be done on demand with an explicit scrub.
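              A sketch of how that looks in practice with btrfs-progs (hypothetical device and mount-point names):

```shell
# Create a two-drive Btrfs RAID1 with both data and metadata mirrored
# (WARNING: destroys any existing data on /dev/sdb and /dev/sdc):
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/array

# On-demand scrub: read everything, verify checksums, and repair any
# bad copy from the good mirror; then check progress/results:
btrfs scrub start /mnt/array
btrfs scrub status /mnt/array
```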

              • #8
                Just putting this out there, but for read speed, RAID 10 with the far=2 layout on 2 disks under mdadm is much faster for reads, with write performance similar to RAID 1. (On hard disks writes are a bit slower, since each write goes to two different parts of the disk.)
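                A sketch of that layout with mdadm (hypothetical device names; `f2` is the far=2 layout, which spreads the second copy across the far half of each disk so reads can stripe like RAID0):

```shell
# Two-disk RAID10 with the far=2 layout
# (WARNING: destroys any existing data on the named devices):
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
    /dev/sdb /dev/sdc

# Put a filesystem on the resulting array and mount it:
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
```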

                • #9
                  These results are not really so surprising. When I benchmarked various RAID levels with different filesystems way back, most of the performance improvements were similar to what you see on an HDD by making a partition smaller, which reduces seeking: a significant improvement, but not a doubling of throughput. Given SSDs' comparatively low latency and high transfer speeds, the benefit of RAID is more the peace of mind against drive failures that RAID 1 provides.

                  • #10
                    Originally posted by Delgarde View Post
                    If you really need as much performance as you can get, and you're already using SSDs and they're the performance bottleneck, *and* you're already using high-end PCIe-based drives (or are too cheap to pay the premium for them)... well, maybe it's worth putting an SSD array into your PC. But I can't imagine that many people will benefit from that...
                    Nope; relieving I/O pressure with a large memcache would probably help more, or micro-service frontend machines could improve throughput by reducing context switching on the storage machines.
                    For a desktop, though, none of this is likely to reduce latency, which matters more there than throughput.
