When will the Phoronix Test Suite pts/fio run more samples?

  • #1

    I'm trying to do some NFSoRDMA testing, and my head node has a RAID0 array of four Samsung 860 EVO 1 TB SATA 6 Gbps SSDs connected to a Broadcom/Avago/LSI MegaRAID 12 Gbps SAS HW RAID HBA.

    I've got 4x EDR Infiniband (Mellanox ConnectX-4 dual port 100 Gbps NIC) along with IPoIB and RDMA configured. The RAID array is formatted with XFS, and all nodes are running CentOS 7.6.1810.

    The RAID array is presented to the network over the interconnect using NFS (more specifically, NFSoRDMA), which has also already been configured.

    I've installed the test suite on both the head node and one of the slave nodes in my micro cluster, and I've edited /usr/share/phoronix-test-suite/pts-core/static/user-config-defaults.xml so that the EnvironmentDirectory variable points to the NFSoRDMA mount point.
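For reference, the change amounts to a single element in that file; a minimal fragment, with the surrounding XML structure omitted and the mount point being an example path, not the poster's actual one:

```xml
<!-- fragment of user-config-defaults.xml; surrounding elements omitted -->
<!-- /mnt/pts-nfs/ is an example NFSoRDMA mount point -->
<EnvironmentDirectory>/mnt/pts-nfs/</EnvironmentDirectory>
```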

    When I run the pts/fio sequential write test, both buffered and unbuffered, direct and indirect, with a 64 kiB block size and Linux AIO, the test states that the estimated trial run count is 3, but it actually runs anywhere between 12 and 15 trials.

    My question is: under what circumstances will it automatically increase the trial run count like that?

    Thank you.

  • #2
    The short answer is that when the standard deviation is above 3.5%, it will increase the dynamic run count, up to 5x (for short-running tests), to try to get a more accurate result.

    There are some tunables and such, but that is the basic behavior: if there is too much variation, increase the run count to try to get a more reliable figure.
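The behavior described above can be sketched roughly as follows. This is a minimal illustration, not PTS's actual implementation; the 3.5% threshold and the 5x cap come from the post, while the function names and the `trial()` callable are hypothetical:

```python
import statistics

def run_benchmark(trial, min_runs=3, max_multiplier=5, threshold_pct=3.5):
    """Run trial() until the relative standard deviation of the results
    drops to threshold_pct, or the run cap (min_runs * max_multiplier) is hit."""
    results = [trial() for _ in range(min_runs)]
    max_runs = min_runs * max_multiplier
    while len(results) < max_runs:
        rel_stddev = statistics.stdev(results) / statistics.mean(results) * 100
        if rel_stddev <= threshold_pct:
            break  # results are stable enough; stop sampling
        results.append(trial())
    return results
```

With a noisy trial, this runs up to 15 times (3 x 5), which would match the 12-15 runs the original poster observed.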
    Michael Larabel
    http://www.michaellarabel.com/



    • #3
      Thank you.



      • #4
        Originally posted by Michael:
        The short answer is that when the standard deviation is above 3.5%, it will increase the dynamic run count, up to 5x (for short-running tests), to try to get a more accurate result.

        There are some tunables and such, but that is the basic behavior: if there is too much variation, increase the run count to try to get a more reliable figure.
        @Michael

        Would you mind giving my data a gut-check/fresh-eyes review?

        So here is my micro cluster configuration:

        Headnode:
        Asus P9X79-E WS, Intel Core i7-4930K (6-cores, HTT disabled), 8x Crucial 8GB Ballistix Sport DDR3-1600 Non-ECC, Unbuffered 9-9-9-24 RAM, 1x Intel 535s Series 240 GB SATA 6 Gbps SSD, 4x Samsung 860 EVO 1 TB SATA 6 Gbps SSD (on RAID0 on LSI MegaRAID 9341-8i 12 Gbps SAS HW RAID HBA), 4x HGST 6 TB Ultrastar 7200 rpm SATA 6 Gbps HDD (also on RAID0 on LSI MegaRAID 9341-8i), eVGA GTX Titan 6 GB (I think?), Mellanox ConnectX-4 dual port 4x EDR Infiniband NIC, CentOS 7.6.1810

        Slave nodes:
        Supermicro 6027TR-HTRF, Supermicro X9DRT, 2x Intel Xeon E5-2690 (v1) (8-cores, HTT disabled), 8x Micron 16GB DDR3-1866 ECC Registered 4RX4 RAM (running at DDR3-1600 due to it being 4R), 1x Intel 540s Series 1 TB SATA 6 Gbps SSD, 1x HGST 3TB 7200 rpm SATA 6 Gbps HDD, Matrox G200e, Mellanox ConnectX-4 dual port 4x EDR IB NIC, CentOS 7.6.1810

        IB Switch:
        Mellanox MSB-7890 36-port externally managed 4x EDR IB switch. Headnode runs OpenSM.

        When I test the 4x Samsung 860 EVO 1 TB (4 TB total) in RAID0 on the LSI MegaRAID 9341-8i, on the local host, this is what I get:

        pts/fio
        sequential write test only
        Linux AIO
        both buffered and unbuffered
        both direct and indirect I/O
        64 kiB block size
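The parameter list above maps onto a plain fio job file roughly as follows. This is a sketch, not pts/fio's actual profile; iodepth, size, and the target directory are assumptions:

```ini
; seq-write.fio -- approximates the parameters above (hypothetical job file)
[seq-write]
rw=write          ; sequential write
bs=64k            ; 64 kiB block size
ioengine=libaio   ; Linux AIO
direct=1          ; direct I/O; set direct=0 for the indirect/buffered variants
iodepth=16        ; assumption, not taken from pts/fio
size=4g           ; assumption
directory=/mnt/raid0   ; example mount point
```

Run with `fio seq-write.fio` to compare against the pts/fio numbers.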

        Sequential Write speed of four Samsung 860 EVO in RAID0:
        unbuffered, indirect I/O
        211 MB/s

        unbuffered, direct I/O
        210 MB/s

        buffered, indirect I/O
        5263 MB/s

        buffered, direct I/O
        5220 MB/s


        When I test the same array over NFSoRDMA, this is what I get:
        Sequential Write speed of four Samsung 860 EVO in RAID0:
        unbuffered, indirect I/O
        4320 MB/s, an average of 68467 IOPS

        unbuffered, direct I/O
        4294 MB/s, an average of 68700 IOPS

        buffered, indirect I/O
        4032 MB/s, an average of 64500 IOPS

        buffered, direct I/O
        3970 MB/s, an average of 63500 IOPS

        My two questions:

        1) How does the Phoronix Test Suite handle the sequential write test on a RAID0 SSD array? The local host results of 210-211 MB/s seem unusually low, as they are well below what a single Samsung 860 EVO 1 TB SATA 6 Gbps SSD can do.

        2) Why do the NFSoRDMA results seem high for the unbuffered runs but "low" for the buffered runs, given that the local host achieves faster buffered sequential writes? At 5263 MB/s and 5220 MB/s, the local buffered results work out to roughly 42 and 41 Gbps respectively, which my 100 Gbps 4x EDR InfiniBand interconnect should easily be able to handle, yet only around 32 Gbps was used when the same benchmark was run over NFSoRDMA.
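For what it's worth, the conversion behind those figures, assuming decimal megabytes as fio reports them:

```python
def mbps_to_gbps(mb_per_s: float) -> float:
    """Convert decimal megabytes/second to gigabits/second."""
    return mb_per_s * 8 / 1000

print(round(mbps_to_gbps(5263), 1))  # local buffered, indirect I/O: 42.1
print(round(mbps_to_gbps(5220), 1))  # local buffered, direct I/O:   41.8
print(round(mbps_to_gbps(4032), 1))  # NFSoRDMA buffered, indirect:  32.3
```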

        If you could share your insight into how the Phoronix Test Suite operates and what I might do to tune/dial in the results between the two testing paradigms (local host vs. NFSoRDMA), it would be greatly appreciated.

        Thank you.



        • #5
          Perhaps it is because my post above with the results is currently unapproved that I'm not able to edit it. The edit I wanted to add: for the local host tests, the buffered runs with indirect and direct I/O have average IOPS counts of 84233 and 83500 respectively.

          My understanding of Linux mount options is that if sync is not specified, then async should be the default.
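That rule can be sanity-checked against a mount's option string, e.g. a line from /proc/mounts. The export path, mount point, and option string below are illustrative examples, not taken from the poster's system:

```python
def effective_sync_mode(options: str) -> str:
    """Return 'sync' if the mount option string lists it explicitly,
    otherwise the Linux default of 'async'."""
    return "sync" if "sync" in options.split(",") else "async"

# example /proc/mounts line for an NFSoRDMA mount (20049 is the usual RDMA port)
line = "host:/export /mnt/pts nfs4 rw,relatime,proto=rdma,port=20049 0 0"
print(effective_sync_mode(line.split()[3]))  # async
```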

          I'm also not sure how Linux handles IOPS for RAID0 arrays in conjunction with NFSoRDMA's own IOPS handling.
