4 x SSD Btrfs/EXT4 RAID Tests On Linux 4.15

  • 4 x SSD Btrfs/EXT4 RAID Tests On Linux 4.15

    Phoronix: 4 x SSD Btrfs/EXT4 RAID Tests On Linux 4.15

    Using the high-end SilverStone TS421S 4-disk SATA drive enclosure, I've been carrying out a number of Btrfs and EXT4 file-system multi-disk benchmarks over the past week. Here are the latest numbers for how Btrfs' native RAID capabilities stack up against EXT4 when using MDADM "soft" RAID.
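
    For anyone wanting to reproduce the comparison, here is a minimal sketch of the two kinds of setups under test (device names such as /dev/sd[b-e] are placeholders; the article does not list its exact commands):

      # EXT4 on MDADM "soft" RAID (example: RAID5 across four SSDs)
      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
      mkfs.ext4 /dev/md0

      # Btrfs using its native multi-device RAID support, no mdadm layer
      mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde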

  • #2
    EXT4 meanwhile offered better sequential write performance.
    Well, except it didn't, not across the board. The fastest ext4 setup is indeed faster than the fastest btrfs setup, but comparing like-for-like configurations looks a little different: btrfs is faster for RAID5 and RAID6.

    It would be nice if the plots were scaled relative to EXT4 performance. And it would be nice to show mdraid's impact by including plain btrfs on top of mdraid, as sketched below.
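
    A minimal sketch of that baseline (device names are placeholders): btrfs created as a plain single-device filesystem on top of an md array, so any gap versus btrfs native RAID can be pinned on the md layer:

      # mdadm RAID5 underneath, btrfs as an ordinary filesystem on top
      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
      mkfs.btrfs /dev/md0
      mount /dev/md0 /mnt/test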

    Comment


    • #3
      Did you manage to capture CPU and/or RAM usage during the runs? Some of those results look like they could be affected either by the hardware (as you noted at times) or by regressions, and being able to see whether the filesystems are using RAM differently would be good.

      Good to see RAID0 making a comeback. I always liked having swap and scratch space etc. on that. Could be interesting on NVMe devices.
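
      Something as simple as this, run alongside each benchmark, would capture it (a rough sketch; the one-second intervals are arbitrary):

        # CPU and memory, sampled once a second
        vmstat 1 > vmstat.log &
        # Per-device I/O utilisation, to spot a saturated disk or controller
        iostat -x 1 > iostat.log &
        # Free vs. cached memory, to see how much RAM the FS soaks up
        free -m -s 1 > free.log &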

      Comment


      • #4
        There is something wrong here. Several tests do not make sense.
        I have an old 64GB OCZ-VERTEX 2 with an A6-3670 and 4GB of DDR3 RAM, and it outperforms several of the single-disk results here.

        I mean, according to the graphs on page 1, ext4 (single disk) sequential read does around 76 MB/s.
        With an SSD capable of 500 MB/s? Impossible.

        I think either that SATA adapter is absolute crap or there is a bug in 4.15.
        Can you retest with a real HBA, like an LSI SAS 9207-8i?
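
        Before swapping hardware, a raw-device read would show whether the enclosure itself is the cap (a sketch; substitute one of the SSDs behind the TS421S for /dev/sdb):

          # Buffered sequential read straight off the block device, no filesystem involved
          hdparm -t /dev/sdb

          # Or with dd, dropping caches first so the number is honest
          echo 3 > /proc/sys/vm/drop_caches
          dd if=/dev/sdb of=/dev/null bs=1M count=4096 status=progress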

        Comment


        • #5
          Wow. That was some utterly broken performance.
          A single-disk ext4 setup completes the SQLite test 20 times faster than a four-disk RAID5 ext4 setup?
          The rest is just all over the place.

          Comment


          • #6
            I use ext4 in a four-disk RAID6 setup that was a two-disk RAID1 setup before, and your results are pretty much what I encountered. The system is an rsync target; the rsync run used to finish in around 15 minutes on the RAID1 setup, and now it takes more than 50 minutes on the RAID6 setup. This is just horrible. CPU usage is around 3-5% on an HP MicroServer Gen8 with the Celeron processor.

            Comment


            • #7
              Four disk RAID0 on EXT4 was very fast in IOzone.
              Hey, after looking at the results, shouldn't that read "Four disk RAID0 on BTRFS was very fast in IOzone"? At least the chart shows the btrfs RAID0 bar skyrocketing, not ext4's.

              Comment


              • #8
                Originally posted by SystemCrasher View Post
                Hey, after looking at the results, shouldn't that read "Four disk RAID0 on BTRFS was very fast in IOzone"? At least the chart shows the btrfs RAID0 bar skyrocketing, not ext4's.
                Came here to mention that too.

                Comment


                • #9
                  Originally posted by milkylainen View Post
                  Wow. That was some utterly broken performance.
                  A single-disk ext4 setup completes the SQLite test 20 times faster than a four-disk RAID5 ext4 setup?
                  It's certainly plausible if there are single-block writes that translate into read-modify-writes of a whole stripe on the RAID arrays.
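
                  Easy to demonstrate with fio (a sketch, not the article's SQLite workload; the path and sizes are made up, and the 1536k figure assumes a four-disk RAID5 with the default 512K chunk, i.e. three data disks per stripe):

                    # Small synchronous writes: each forces a parity read-modify-write
                    fio --name=rmw --filename=/mnt/raid5/test --size=1G \
                        --rw=randwrite --bs=4k --sync=1 --runtime=30 --time_based

                    # Full-stripe writes (3 data disks x 512K = 1536K) avoid the penalty
                    fio --name=fullstripe --filename=/mnt/raid5/test --size=1G \
                        --rw=write --bs=1536k --runtime=30 --time_based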

                  Comment


                  • #10
                    Originally posted by mistvieh View Post
                    I use ext4 in a four-disk RAID6 setup that was a two-disk RAID1 setup before, and your results are pretty much what I encountered. The system is an rsync target; the rsync run used to finish in around 15 minutes on the RAID1 setup, and now it takes more than 50 minutes on the RAID6 setup. This is just horrible.
                    When making the filesystem, did you set the correct stride & stripe-width parameters?
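
                    For a four-disk RAID6 with mdadm's default 512K chunk, the numbers would be (a sketch; adjust to your actual chunk size):

                      # stride = chunk / block = 512K / 4K = 128
                      # stripe-width = stride x data disks = 128 x 2 (RAID6 on 4 disks) = 256
                      mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0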

                    Comment
