
Btrfs RAID 0/1 Benchmarks On The Linux 4.1 Kernel


  • Btrfs RAID 0/1 Benchmarks On The Linux 4.1 Kernel

    Phoronix: Btrfs RAID 0/1 Benchmarks On The Linux 4.1 Kernel

    With the Linux 4.1 kernel coming together nicely, I've begun testing this new kernel (separate from all the fully-automated Git testing done each day via the LinuxBenchmarking.com systems) under a variety of workloads, stressing different systems and focusing on the changes in the major subsystems. One of the systems this week has been running fresh Btrfs RAID Linux file-system benchmarks: an eight-disk server, providing fresh numbers to follow up my Btrfs RAID tests from a few months back on an older server.

    http://www.phoronix.com/vr.php?view=21702

  • #2
    I'm a bit surprised to see how slow the RAID1 read tests were, at least for the 4-disk setup. Not sure if this is because of Btrfs, the controller, or just the nature of RAID1. I don't think I've ever seen benchmarks of a 4-disk RAID1 setup so I wouldn't really know.



    • #3
      Theoretically, RAID1 can be as fast as all disks combined for sequential reads, as long as the RAID controller supports proper load balancing; I'm not sure how btrfs implements its load balancing. In practice it rarely happens, though, because most reads aren't very sequential. You'll only get maximum performance when reading large sequential files, and most Linux systems consist of thousands of very tiny files where access latency becomes the bottleneck.
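      The trade-off described above can be sketched with a toy model (all numbers here are illustrative assumptions, not measurements from the article's benchmarks):

```python
# Toy model of RAID1 read performance: sequential reads can, ideally,
# be striped across mirrors, while tiny-file reads are bounded by
# per-file access latency. All numbers are assumed, not measured.

def sequential_mb_s(per_disk_mb_s: float, mirrors: int) -> float:
    """Best case: a perfectly load-balanced controller reads a
    different extent from each mirror in parallel."""
    return per_disk_mb_s * mirrors

def small_file_mb_s(file_kb: float, access_ms: float) -> float:
    """Latency-bound case: each tiny file costs a full access, so a
    single reader gains almost nothing from extra mirrors."""
    files_per_s = 1000.0 / access_ms
    return files_per_s * file_kb / 1024.0

print(sequential_mb_s(150.0, 4))  # 4-way mirror, large files: 600.0 MB/s
print(small_file_mb_s(4.0, 8.0))  # 4 KiB files, 8 ms access: ~0.49 MB/s
```

      Under this model the 4-way mirror only pays off for the large sequential case, which is consistent with small-file workloads seeing little benefit from extra spindles.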



      • #4
        Nice benchmarks!
        Mind including the "single JBOD" config in the tests, too?



        • #5
          Originally posted by duby229 View Post
          Theoretically, RAID1 can be as fast as all disks combined for sequential reads, as long as the RAID controller supports proper load balancing; I'm not sure how btrfs implements its load balancing. In practice it rarely happens, though, because most reads aren't very sequential. You'll only get maximum performance when reading large sequential files, and most Linux systems consist of thousands of very tiny files where access latency becomes the bottleneck.
          Curiously, does compressing the small-file sections (like the system directories), particularly if they're text, help reduce this access latency? I know it used to, to a degree, on the old spinning discs. But nowadays, with all this newfangled stuff...



          • #6
            Originally posted by stiiixy View Post

            Curiously, does compressing the small-file sections (like the system directories), particularly if they're text, help reduce this access latency? I know it used to, to a degree, on the old spinning discs. But nowadays, with all this newfangled stuff...
            Well, RAID on SSDs has entirely different characteristics; access latency is a smaller problem.

            Speaking of compressing small files, I keep a squashfs filesystem mounted just for that purpose. It gets re-synced every night, so the next day's filesystem is last night's image. It really does help improve performance when accessing many small files.



            • #7
              Originally posted by duby229 View Post
              Theoretically, RAID1 can be as fast as all disks combined for sequential reads, as long as the RAID controller supports proper load balancing; I'm not sure how btrfs implements its load balancing. In practice it rarely happens, though, because most reads aren't very sequential. You'll only get maximum performance when reading large sequential files, and most Linux systems consist of thousands of very tiny files where access latency becomes the bottleneck.
              It should speed up any reads: after the reads are reorganized for head position, it should just send batches to both disks.
              And since when does anyone use four drives for RAID 1? Did he mean to test 0+1?



              • #8
                I'd really like to see some benchmarks analyzing the performance of btrfs with snapshots (since that's why most people use it). Something like:
                1. write data
                2. snapshot
                3. read & write, modifying ~5% of the data
                4. repeat from 2.

                My anecdotal experience suggests that it slows down massively for the common use case, but it'd be nice to see some verification of this.
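                Step 3 of that loop, rewriting ~5% of the data in place so a prior snapshot forces copy-on-write, could be driven by a toy workload generator like this (a hypothetical sketch, not anything from the article's test suite; `modify_fraction` is a made-up helper name):

```python
import os
import random

def modify_fraction(path: str, fraction: float = 0.05, block: int = 4096) -> int:
    """Overwrite roughly `fraction` of a file's blocks in place with
    random data, forcing a CoW filesystem to unshare those extents
    from any earlier snapshot. Returns the number of blocks rewritten."""
    size = os.path.getsize(path)
    nblocks = max(1, size // block)
    victims = random.sample(range(nblocks), max(1, int(nblocks * fraction)))
    with open(path, "r+b") as f:
        for i in victims:
            f.seek(i * block)
            # The last block may be partial; never write past EOF.
            f.write(os.urandom(min(block, max(0, size - i * block))))
    return len(victims)
```

                Running it against files on a snapshotted btrfs subvolume, then timing the read-back, would approximate the write/snapshot/modify cycle described above.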



                • #9
                  Recently I built a NAS (MicroServer N40L, running Fedora 21) for my colleague, with 5 disks; four 2TB WD Greens form a Btrfs RAID5.

                  I've tried writing to the Btrfs RAID5; the write speed (using both 4K and 1M block sizes) ranged from 31x MB/s to 330 MB/s. The guy was pretty happy with the result.

                  Screenshot -> https://flic.kr/p/sbZEVx
                  Last edited by terrywang; 05-20-2015, 03:00 AM. Reason: remove link
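                  A rough way to reproduce this kind of block-size comparison without dd is a small timing loop (a hedged sketch; the 31x to 330 MB/s figures above came from the poster's own hardware, not from this code):

```python
import os
import time

def write_throughput_mb_s(path: str, block_size: int, total_bytes: int) -> float:
    """Write `total_bytes` of zeros in `block_size` chunks, fsync at the
    end, and return the observed rate in MB/s (roughly what
    `dd if=/dev/zero of=... bs=... conv=fsync` measures)."""
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # include the flush-to-disk cost
    return (written / 1e6) / (time.perf_counter() - start)
```

                  Calling it with `block_size=4096` and `block_size=1 << 20` against a file on the array mirrors the 4K vs 1M comparison.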



                  • #10
                    Originally posted by toyotabedzrock View Post
                    And since when does anyone use four drives for raid 1?
                    I do. But with Btrfs it's a bit different from what you might expect: it doesn't keep four copies of the same data, it still only keeps two copies, on different disks. The advantage is that you can have, say, three 1TB disks and one 3TB disk and get a 3TB mirrored array.

                    Originally posted by toyotabedzrock View Post
                    Did he mean to test 0+1?
                    I'd like to see such tests too, but I doubt that's what Michael did, because that should have performed quite a bit better than a two-disk RAID1.
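                    The mixed-size capacity rule described above (two copies of every extent, never on the same device) can be checked with a few lines. This is an approximation: real btrfs allocates space in chunks, so actual usable capacity comes out slightly lower.

```python
def btrfs_raid1_usable_tb(disks):
    """Approximate usable capacity of a btrfs RAID1 array: every extent
    is stored twice on two different devices, so usable space is half
    the total, capped because the second copy can never share a device
    with the first (the largest disk limits pairing)."""
    total = sum(disks)
    return min(total / 2, total - max(disks))

print(btrfs_raid1_usable_tb([1, 1, 1, 3]))  # three 1TB + one 3TB -> 3.0 TB
print(btrfs_raid1_usable_tb([1, 2]))        # 1TB + 2TB -> 1 TB usable
```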

