4-Disk Btrfs Native RAID Performance On Linux 4.10

    Phoronix: 4-Disk Btrfs Native RAID Performance On Linux 4.10

    While I have already posted some single-disk file-system benchmarks on Linux 4.10, for some benchmarking fun this weekend I decided to run some fresh tests of Btrfs RAID capabilities using four solid-state drives (SSDs).

    http://www.phoronix.com/vr.php?view=24083

  • #2
    @Michael:

    I've mentioned this before, but when using SSDs (where there is no seek penalty for mirrored writes), please do consider using the 'far' layout for two-disk RAID-10 (aka RAID-1E) and four-disk RAID-10.

    In theory, read performance will most likely trail the two/four-disk RAID-0 only very slightly, and the write rate should be comparable to normal RAID-1/RAID-10, because writes can be processed in parallel on different controller channels.

    I *may* receive some older 120-128GB SSDs in the mail one of these days, and if I do, I'll be sure to run this benchmark with the 'far' layout results included.
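    For reference, a sketch of how the 'far' layout suggested above would be set up with mdadm; the device names are placeholders, and the commands are destructive to the listed disks.

```shell
# Two-disk RAID-10 with the far layout (f2): reads stripe across both
# drives like RAID-0, while every block still exists on both disks.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb /dev/sdc

# Four-disk variant of the same layout.
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=4 /dev/sd[b-e]

# Verify the layout took effect.
mdadm --detail /dev/md0 | grep -i layout
```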



    • #3
      It would be interesting to see how btrfs' native RAID compares with mdraid in similar disk and RAID configurations.
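      A comparison along those lines could be set up as follows (a sketch; /dev/sd[b-e] are placeholder devices, and both commands destroy existing data on them):

```shell
# Btrfs native RAID10, applied to both data (-d) and metadata (-m):
mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# mdraid equivalent for comparison: a RAID10 block device with ext4 on top.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
mkfs.ext4 /dev/md0
```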



      • #4
        I would like to understand whether ext4 or Btrfs is better for an SSD on Ubuntu. What do you advise?



        • #5
          Originally posted by Charlie68 View Post
          I would like to understand whether ext4 or Btrfs is better for an SSD on Ubuntu. What do you advise?
          ext4 for now; once f2fs support is added to GRUB, f2fs will be the best choice for SSDs.



          • #6
            Originally posted by Charlie68 View Post
            I would like to understand whether ext4 or Btrfs is better for an SSD on Ubuntu
            If you need Btrfs features, you have no choice; if you don't need them, you don't need Btrfs.



            • #7
              Why is RAID1 slower at reads than RAID0?



              • #8
                Originally posted by starshipeleven View Post
                ext4 for now; once f2fs support is added to GRUB, f2fs will be the best choice for SSDs.
                You can use F2FS as a root partition just fine; IIRC you just need the initramfs to be on a boot partition GRUB can read, with the initramfs itself supporting F2FS, which is what I'm doing.
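                One way such a setup might look (a Debian/Ubuntu-flavoured sketch; the partition names are placeholders, and mkfs destroys existing data):

```shell
# GRUB cannot read F2FS, so /boot stays on ext4 while root uses F2FS.
mkfs.ext4 /dev/sda1        # /boot: kernel + initramfs, readable by GRUB
mkfs.f2fs /dev/sda2        # F2FS root filesystem

# Make sure the initramfs can mount the F2FS root:
echo f2fs >> /etc/initramfs-tools/modules
update-initramfs -u
```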

                Originally posted by pal666 View Post
                Why is RAID1 slower at reads than RAID0?
                RAID1 is data redundancy; RAID0 is data striping meant for performance.



                • #9
                  Originally posted by pal666 View Post
                  Why is RAID1 slower at reads than RAID0?
                  AFAIK Btrfs RAID1 still reads from a single drive per process, i.e. it does not spread reads evenly across the drives in the array the way block-level RAID1 does.
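                  At the time, Btrfs reportedly picked which mirror to read by the parity of the reading process's PID. A toy model of that heuristic (`pick_mirror` is a hypothetical name for illustration, not the kernel function):

```python
def pick_mirror(pid: int, num_copies: int = 2) -> int:
    """Toy model: PID parity selects which copy of the data is read.

    The point is that a single process always hits the same drive,
    so a single-threaded benchmark never benefits from the second copy.
    """
    return pid % num_copies

# Every read issued by PID 1000 goes to drive 0; PID 1001 to drive 1.
print(pick_mirror(1000))  # 0
print(pick_mirror(1001))  # 1
```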



                  • #10
                    Originally posted by AsuMagic View Post
                    You can use F2FS as a root partition just fine; IIRC you just need the initramfs to be on a boot partition GRUB can read, with the initramfs itself supporting F2FS, which is what I'm doing.
                    I know it's doable; I was tailoring the answer to the user.
                    Users talking about Ubuntu and asking whether ext4 or Btrfs is better aren't likely to be veteran enough to pull this off if the installer isn't doing it (and the installer is NOT doing it, even though it would be trivial).

                    RAID1 is data redundancy; RAID0 is data striping meant for performance.
                    RAID1 should theoretically have the same read speed as RAID0, as it can read from both drives at the same time.

