Btrfs On 4 x Intel SSDs In RAID 0/1/5/6/10


  • Btrfs On 4 x Intel SSDs In RAID 0/1/5/6/10

    Phoronix: Btrfs On 4 x Intel SSDs In RAID 0/1/5/6/10

    Earlier this month I published Btrfs RAID benchmarks on two HDDs, but some more interesting results are now available: Btrfs RAID file-system benchmarks testing the next-generation Linux file-system across four Intel Series 530 solid-state drives. All RAID levels supported by the Btrfs file-system were benchmarked atop Ubuntu 14.10 with the Linux 3.18-rc1 kernel: RAID 0, 1, 5, 6, and 10, along with a Btrfs single-SSD setup and a Btrfs file-system linearly spanning all four drives.

    http://www.phoronix.com/vr.php?view=21104
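
    For anyone wanting to reproduce the setups at home, the mkfs invocations would look roughly like this; the article doesn't list the exact commands, and the device names here are hypothetical:

      # one data/metadata profile per run: raid0, raid1, raid5, raid6, or raid10
      mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # the linear span needs the data profile set to "single" explicitly,
      # since multi-device mkfs.btrfs defaults to raid0 data
      mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # single-SSD baseline
      mkfs.btrfs /dev/sdb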

  • #2
    Please also test ZFS on Solaris 11.2/FreeBSD/Linux

    Originally posted by phoronix View Post
    Phoronix: Btrfs On 4 x Intel SSDs In RAID 0/1/5/6/10

    Earlier this month I published Btrfs RAID benchmarks on two HDDs, but some more interesting results are now available: Btrfs RAID file-system benchmarks testing the next-generation Linux file-system across four Intel Series 530 solid-state drives. All RAID levels supported by the Btrfs file-system were benchmarked atop Ubuntu 14.10 with the Linux 3.18-rc1 kernel: RAID 0, 1, 5, 6, and 10, along with a Btrfs single-SSD setup and a Btrfs file-system linearly spanning all four drives.

    http://www.phoronix.com/vr.php?view=21104
    Hello,

    with such a nice setup it's really a pity not to test ZFS on Solaris 11.2, FreeBSD 10.x and Linux, don't you think? :-)

    Thanks!
    Karel



    • #3
      The comparison "Btrfs single + mdraid level X" vs. the same RAID level in Btrfs alone would be interesting, especially since you can change the stripe size in mdraid; in Btrfs it's fixed at 64K, AFAIK.
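
      A minimal sketch of the two setups being compared, assuming four hypothetical drives /dev/sdb through /dev/sde; mdadm's --chunk flag is the stripe-size knob that Btrfs lacks:

        # mdraid RAID 0 with a 256K chunk (stripe) size, plain Btrfs on top
        mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=256 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.btrfs /dev/md0

        # native Btrfs RAID 0 across the same drives; no stripe-size option exists
        mkfs.btrfs -d raid0 -m raid0 /dev/sdb /dev/sdc /dev/sdd /dev/sde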



      • #4
        Thanks for these tests, Michael.
        There were some very interesting (nonlinear) results.
        I'd love to see two things added in future tests: mdadm RAID for comparison, and CPU usage; a rough way to capture the latter is sketched below.
        The scalability of Btrfs should be improving, especially now that its developers are working at a place where they have easy access to absolutely monstrous arrays.
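
        As a sketch (vmstat ships with procps; the benchmark command is a placeholder):

          # log CPU utilization once per second while the disk benchmark runs
          vmstat 1 > cpu-usage.log &
          VMSTAT_PID=$!
          ./run-disk-benchmark.sh   # placeholder for the actual test run
          kill $VMSTAT_PID          # stop logging when the benchmark finishes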



        • #5
          Originally posted by kgardas View Post
          Hello,

          with such a nice setup it's really a pity not to test ZFS on Solaris 11.2, FreeBSD 10.x and Linux, don't you think? :-)

          Thanks!
          Karel
          Likely too much work without multiple premium users and/or donors requesting such tests.
          Michael Larabel
          http://www.michaellarabel.com/



          • #6
            I have been using RAID 5 for quite some time now on three 1TB HDDs (Seagate blacks).

            I started with one black, one blue and one green (all 1TB), then replaced the blue and the green with two blacks, and everything went fine. The system has already suffered some forced power-offs, and the array recovered completely without any loss (that I noticed), so I think RAID on Btrfs (even 5/6) is pretty solid.
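
            For reference, a sketch of the kind of in-place swap described above (device names and mount point are hypothetical); btrfs replace rebuilds onto the new drive while the array stays mounted:

              # swap the old drive /dev/sdc for the new /dev/sdf on a live array
              btrfs replace start /dev/sdc /dev/sdf /mnt/array
              btrfs replace status /mnt/array   # watch the rebuild progress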



            • #7
              RAID 5/6 is still considered experimental for Btrfs but in the few days I've been running those configurations I haven't encountered any problems.
              That's because you're testing performance under normal conditions, not reliability under abnormal conditions. The purpose of RAID 5/6 is resiliency, and it's not fit for that purpose yet: a single disk failure can take down the whole array, making it not much better than RAID 0 in its current state.
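
              A disposable way to check that for yourself, sketched with loopback files so no real data is at risk (all paths hypothetical):

                # build a throwaway four-device Btrfs RAID 5 from loopback files
                for i in 1 2 3 4; do
                    truncate -s 2G /tmp/disk$i.img
                    losetup /dev/loop$i /tmp/disk$i.img
                done
                mkfs.btrfs -d raid5 -m raid5 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
                mount /dev/loop1 /mnt/test

                # simulate losing one device, then see whether the array survives
                umount /mnt/test
                wipefs -a /dev/loop4
                mount -o degraded /dev/loop1 /mnt/test
                btrfs scrub start /mnt/test   # verify what's still readable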



              • #8
                Why are the RAID 1 reads so slow? You've got four sources that reads can be farmed out to. RAID 0 and non-RAID seem to be farming reads out to the multiple sources.



                • #9
                  Originally posted by mufasa72 View Post
                  Why are the RAID 1 reads so slow? You've got four sources that reads can be farmed out to. RAID 0 and non-RAID seem to be farming reads out to the multiple sources.
                  Probably because the btrfs code doesn't yet farm it out in RAID 1 cases, even though it should. Might be worth filing a bug report in case someone else hasn't already.
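
                  Before filing it, a quick way to confirm the behavior with fio (mount point and sizes are placeholders); with four reader jobs, mirror-aware read balancing should light up all member drives in iostat:

                    # four parallel sequential readers against the Btrfs RAID 1 mount
                    fio --name=raid1-read --directory=/mnt/btrfs --rw=read \
                        --bs=1M --size=2g --numjobs=4 --direct=1 --group_reporting

                    # in another terminal: check whether every member device sees I/O
                    iostat -x 1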



                  • #10
                    Originally posted by gigaplex View Post
                    Probably because the btrfs code doesn't yet farm it out in RAID 1 cases, even though it should. Might be worth filing a bug report in case someone else hasn't already.
                    Then again, I'm not sure how the non-RAID sequential reads are significantly faster than a single disk; if it's actually a linear span, as Michael claimed, it shouldn't have multiple sources to read from.
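
                    One way to probe that on the article's setup would be to inspect how the data chunks actually landed; Btrfs allocates chunks across member devices even with the "single" profile, so a large file may still end up spread over several drives (a guess, not something the article verifies):

                      btrfs filesystem show /mnt/btrfs   # bytes used on each member device
                      btrfs filesystem df /mnt/btrfs     # profile of the data/metadata chunks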

