
Benchmarks Of Btrfs RAID On Four Samsung 970 EVO NVMe SSDs


  • Benchmarks Of Btrfs RAID On Four Samsung 970 EVO NVMe SSDs

    Phoronix: Benchmarks Of Btrfs RAID On Four Samsung 970 EVO NVMe SSDs

    The MSI MEG X399 CREATION that we received as part of the launch package for the Threadripper 2950X and Threadripper 2990WX includes the XPANDER-AERO, a PCI Express x16 card that provides four M.2 NVMe SSD slots. The XPANDER-AERO is actively cooled and could be passed off as a small-form-factor graphics card on a very cursory examination. With this card I've been running tests on four Samsung 970 EVO NVMe SSDs in RAID to see what Linux I/O performance they can offer. Here are some initial benchmarks using Btrfs.

    http://www.phoronix.com/vr.php?view=26728
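For readers who want to reproduce a comparable setup, creating a four-device Btrfs RAID-0 array would look roughly like this (a sketch only; the article does not state the exact invocation, and the device names and mount point are placeholders):

```
# Stripe both data and metadata across all four NVMe drives (RAID-0).
# Device names are placeholders for the four 970 EVOs on the XPANDER-AERO.
mkfs.btrfs -f -d raid0 -m raid0 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mount /dev/nvme0n1 /mnt/btrfs-raid
```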

  • #2
    The sequential read and write performance is so embarrassingly low that I suspect some systematic error somewhere in the setup.
    After all, one would expect significantly higher values from a single NVMe drive. When I read from my Samsung SSD 960 EVO 250GB NVMe storage, I get 1200 MB/s reading sequentially from files - on an encrypted block device!
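A figure like that is easy to sanity-check from userspace. Below is a minimal sketch of measuring sequential read throughput (not the method dwagner used; the file path is a throwaway temp file, and the page cache will inflate the number unless caches are dropped or O_DIRECT is used):

```python
import os
import tempfile
import time

def sequential_read_mbps(path, block_size=1024 * 1024):
    """Read a file front to back in 1 MiB blocks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Demo on a throwaway 64 MiB file. A real benchmark would drop the page
# cache first (echo 3 > /proc/sys/vm/drop_caches) or open with O_DIRECT.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 1024 * 1024))
print(f"{sequential_read_mbps(tmp.name):.0f} MB/s")
os.unlink(tmp.name)
```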

    • #3
      Originally posted by dwagner View Post
      The sequential read and write performance is so embarrassingly low that I suspect some systematic error somewhere in the setup.
      After all, one would expect significantly higher values from a single NVMe drive. When I read from my Samsung SSD 960 EVO 250GB NVMe storage, I get 1200 MB/s reading sequentially from files - on an encrypted block device!
      I believe it's something to do with Btrfs. Again, this article was basically "here's the current results, more pending." I've seen similar oddities on Btrfs in the past.
      Michael Larabel
      http://www.michaellarabel.com/

      • #4
        I have a question: what brand of PCIe power cables do you use in benchmarks? They seem to be pretty heavy duty.

        • #5
          Originally posted by tildearrow View Post
          I have a question: what brand of PCIe power cables do you use in benchmarks? They seem to be pretty heavy duty.
          Just the power cables included with the Corsair AX860i/AX760i. Since I only review power supplies once every few years or so, the power supplies I usually pick up for the high-end systems (e.g. Threadripper / Core i9) are the Corsair AX or HX series, due to their dual 8-pin connectors, good features for the price, etc. I've been very happy with those Corsair power supplies on the higher-end systems; the more conventional systems mostly use budget Corsair/EVGA PSUs.

          Michael Larabel
          http://www.michaellarabel.com/

          • #6
            Originally posted by dwagner View Post
            The sequential read and write performance is so embarrassingly low that I suspect some systematic error somewhere in the setup.
            After all, one would expect significantly higher values from a single NVMe drive. When I read from my Samsung SSD 960 EVO 250GB NVMe storage, I get 1200 MB/s reading sequentially from files - on an encrypted block device!
            Or... just the fact that Btrfs, to this day, is nothing more than a messy patchwork.

            • #7
              Would it be possible to do a historic comparison of Btrfs performance across older and newer kernels? It might be interesting to see whether performance has actually improved.

              • #8
                Haha, meanwhile I'm just interested in where I can get my hands on the XPANDER card without buying the motherboard. That thing intrigues me.

                • #9
                  Originally posted by Michael View Post

                  I believe it's something to do with Btrfs. Again, this article was basically "here's the current results, more pending." I've seen similar oddities on Btrfs in the past.
                  Would be nice to have some md RAID / ZFS data as a reference. It could also be that the hardware disk controller won't scale. RAID-0 should be 2x as fast as a single disk. Back in the day, RAID-5 with five disks offered 4 to 5 times the I/O bandwidth of a single disk.
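The scaling expectation in that comment can be written out as a back-of-the-envelope model (a sketch only; real arrays fall short of these ideal numbers due to controller, bus, and filesystem overhead):

```python
def ideal_raid_read_bw(single_disk_mbps, disks, raid_level):
    """Idealized aggregate sequential-read bandwidth, ignoring
    controller, bus and filesystem overhead."""
    if raid_level == 0:
        # Striping: every disk streams its stripe in parallel.
        return single_disk_mbps * disks
    if raid_level == 5:
        # Conservatively count one disk's worth of bandwidth as parity
        # overhead, matching the "4 to 5 times with 5 disks" rule of thumb.
        return single_disk_mbps * (disks - 1)
    raise ValueError("unsupported RAID level")

# Four 970 EVOs at ~3000 MB/s each would ideally approach 12000 MB/s in RAID-0.
print(ideal_raid_read_bw(3000, 4, 0))  # 12000
print(ideal_raid_read_bw(100, 5, 5))   # 400
```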

                  • #10
                    Originally posted by dwagner View Post
                    The sequential read and write performance is so embarrassingly low that I suspect some systematic error somewhere in the setup.
                    After all, one would expect significantly higher values from a single NVMe drive. When I read from my Samsung SSD 960 EVO 250GB NVMe storage, I get 1200 MB/s reading sequentially from files - on an encrypted block device!
                    I also got consistently similar results from my benchmarks on single Btrfs drives and RAID1 arrays. I tested with both AHCI and NVMe storage, though mostly with compression enabled, which gave me a significant increase in I/O performance. I also find it strange that these PTS tests show odd results like these almost every time. I'd rerun my tests, but I no longer have the same setups; all I can do now is a single-disk benchmark to compare against these results.

                    Michael, you should specify noatime explicitly next time you do Btrfs benchmarks. It's supposed to have a relevant impact on performance. An interesting two-minute read about this: https://gms.tf/btrfs-requires-noatime.html
                    Last edited by EarthMind; 08-19-2018, 11:50 AM.
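The mount-option change the linked article recommends would look roughly like this in /etc/fstab (a sketch only; the UUID and mount point are placeholders):

```
# noatime avoids a metadata (atime) update on every file read, which on
# Btrfs triggers copy-on-write churn. <fs-uuid> and /mnt/bench are placeholders.
UUID=<fs-uuid>  /mnt/bench  btrfs  noatime  0  0
```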
