
How to time limit / strict loop limit runs?


  • #11
    Doing a run right now on a new VM... the other VMs are still going nuts... 43 runs now :-)

    This one (ran 45 times) is finished, if this helps you: https://openbenchmarking.org/result/...AS-RANDREAD407



    • #12
      P.S. If you look at the run, you'll see that IOPS only ran twice....

      Might it be because the systems are cloned?
      Last edited by Yves; 11 October 2019, 06:41 AM.



      • #13
        Code:
        phoronix-test-suite debug-benchmark pts/fio
        
        
        
        Phoronix Test Suite v9.0.1
        
            Installed:     pts/fio-1.12.0
        
        
        Flexible IO Tester 3.16:
        pts/fio-1.12.0
        Disk Test Configuration
        1: Random Read
        2: Random Write
        3: Sequential Read
        4: Sequential Write
        5: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        Type: 1,2
        
        
        1: POSIX AIO
        2: Sync
        3: Linux AIO
        4: Windows AIO
        5: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        IO Engine: 3
        
        
        1: Yes
        2: No
        3: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        Buffered: 2
        
        
        1: No
        2: Yes
        3: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        Direct: 2
        
        
        1:  4KB
        2:  8KB
        3:  16KB
        4:  32KB
        5:  64KB
        6:  128KB
        7:  256KB
        8:  512KB
        9:  1MB
        10: 2MB
        11: 4MB
        12: 8MB
        13: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        Block Size: 5
        
        
        1: Default Test Directory
        2: /
        3: Test All Options
        ** Multiple items can be selected, delimit by a comma. **
        Disk Target: 1
        
        
        ========================================
        Phoronix Test Suite v9.0.1
        System Information
        ========================================
        
        
          PROCESSOR:          4 x Intel Xeon E312xx
            Core Count:       4
            Extensions:       SSE 4.2 + AVX
            Cache Size:       4096 KB
            Microcode:        0x1
        
          GRAPHICS:           Red Hat QXL paravirtual graphic card
            Screen:           1024x768
        
          MOTHERBOARD:        oVirt Node
            BIOS Version:     1.11.0-2.el7
            Chipset:          Intel 440FX 82441FX PMC
            Network:          Red Hat Virtio device
        
          MEMORY:             1 x 4096 MB RAM
        
          DISK:               107GB QEMU HDD + 11GB QEMU HDD
            File-System:      ext4
            Mount Options:    relatime rw seclabel
            Disk Scheduler:   MQ-DEADLINE
        
          OPERATING SYSTEM:   Fedora 30
            Kernel:           5.0.9-301.fc30.x86_64 (x86_64)
            Desktop:          GNOME Shell 3.32.1
            Display Server:   X Server
            Compiler:         GCC 9.0.1 20190312
            System Layer:     KVM
            Security:         SELinux
                              + l1tf: Mitigation of PTE Inversion
                              + meltdown: Mitigation of PTI
                              + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
                              + spectre_v1: Mitigation of __user pointer sanitization
                              + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling
        
        
        Running Pre-Test Script
        /var/lib/phoronix-test-suite/test-profiles/pts/fio-1.12.0/pre.sh: line 3: cd: fio-3.1/: No such file or directory
        
        ========================================
        Flexible IO Tester (Run 1 of 1)
        ========================================
        
        
        Test Run Command: cd /var/lib/phoronix-test-suite/installed-tests/pts/fio-1.12.0/ && ./fio-run randread libaio 0 1 64k 2>&1
        
        test: (g=0): rw=randread, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=64
        fio-3.16
        Starting 1 process
        
        test: (groupid=0, jobs=1): err= 0: pid=22607: Fri Oct 11 18:32:24 2019
          read: IOPS=10.2k, BW=639MiB/s (670MB/s)(12.5GiB/20005msec)
           bw (  KiB/s): min=573568, max=712448, per=99.97%, avg=654107.23, stdev=27331.86, samples=40
           iops        : min= 8962, max=11132, avg=10220.45, stdev=427.11, samples=40
          cpu          : usr=2.95%, sys=13.47%, ctx=7894, majf=0, minf=15
          IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
             issued rwts: total=204463,0,0,0 short=0,0,0,0 dropped=0,0,0,0
             latency   : target=0, window=0, percentile=100.00%, depth=64
        
        Run status group 0 (all jobs):
           READ: bw=639MiB/s (670MB/s), 639MiB/s-639MiB/s (670MB/s-670MB/s), io=12.5GiB (13.4GB), run=20005-20005msec
        
        Result Key: #_RESULT_#
        
        
        Template Line:    READ: bw=#_RESULT_# (861MB/s), 821MiB/s-821MiB/s (861MB/s-861MB/s), io=16.4GiB (17.3GB), run=20002-20002msec
        
        
        Result Parsing Search Key: ": bw="
        
        
        Result Line:    READ: bw=639MiB/s (670MB/s), 639MiB/s-639MiB/s (670MB/s-670MB/s), io=12.5GiB (13.4GB), run=20005-20005msec
        
        
        No Test Results
        
        
        Result Key: #_RESULT_#
        
        
        Template Line:    READ: bw=#_RESULT_# (861MB/s), 821MiB/s-821MiB/s (861MB/s-861MB/s), io=16.4GiB (17.3GB), run=20002-20002msec
        
        
        Result Parsing Search Key: ": bw="
        
        
        Result Line:    READ: bw=639MiB/s (670MB/s), 639MiB/s-639MiB/s (670MB/s-670MB/s), io=12.5GiB (13.4GB), run=20005-20005msec
        
        
        Test Result Parser Returning: 639
        
        
        Result Key: #_RESULT_#
        
        
        Template Line:   write: IOPS=#_RESULT_# BW=891MiB/s (934MB/s)(17.5GiB/20006msec)
        
        
        Result Parsing Search Key: "IOPS="
        
        
        Result Line:   read: IOPS=10.2k, BW=639MiB/s (670MB/s)(12.5GiB/20005msec)
        
        
        Test Result Parser Returning: 10200
        
        
        Log File At: /var/lib/phoronix-test-suite/installed-tests/pts/fio-1.12.0/fio-1.12.0-1570811511-1.log
        
        
        Running Post-Test Script
        /var/lib/phoronix-test-suite/test-profiles/pts/fio-1.12.0/post.sh: line 3: cd: fio-3.1/: No such file or directory
        
        ##############################################################################################################################
        Flexible IO Tester:
        Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 64KB - Disk Target: Default Test Directory
        
        639 MB/s
        
        Average: 639 MB/s
        ##############################################################################################################################
        
        
        ##############################################################################################################################
        Flexible IO Tester:
        Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 64KB - Disk Target: Default Test Directory
        
        10200 IOPS
        
        Average: 10200 IOPS
        ##############################################################################################################################
        
        
        Running Pre-Test Script
        /var/lib/phoronix-test-suite/test-profiles/pts/fio-1.12.0/pre.sh: line 3: cd: fio-3.1/: No such file or directory
        
        ========================================
        Flexible IO Tester (Run 1 of 1)
        ========================================
        
        
        Test Run Command: cd /var/lib/phoronix-test-suite/installed-tests/pts/fio-1.12.0/ && ./fio-run randwrite libaio 0 1 64k 2>&1
        
        test: (g=0): rw=randwrite, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=64
        fio-3.16
        Starting 1 process
        
        test: (groupid=0, jobs=1): err= 0: pid=22672: Fri Oct 11 18:33:06 2019
          write: IOPS=5484, BW=343MiB/s (360MB/s)(6863MiB/20008msec); 0 zone resets
           bw (  KiB/s): min=330347, max=380544, per=99.95%, avg=351064.78, stdev=10822.28, samples=40
           iops        : min= 5161, max= 5946, avg=5485.28, stdev=169.15, samples=40
          cpu          : usr=2.53%, sys=6.65%, ctx=4096, majf=0, minf=15
          IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
             issued rwts: total=0,109741,0,0 short=0,0,0,0 dropped=0,0,0,0
             latency   : target=0, window=0, percentile=100.00%, depth=64
        
        Run status group 0 (all jobs):
          WRITE: bw=343MiB/s (360MB/s), 343MiB/s-343MiB/s (360MB/s-360MB/s), io=6863MiB (7196MB), run=20008-20008msec
        
        Result Key: #_RESULT_#
        
        
        Template Line:    READ: bw=#_RESULT_# (861MB/s), 821MiB/s-821MiB/s (861MB/s-861MB/s), io=16.4GiB (17.3GB), run=20002-20002msec
        
        
        Result Parsing Search Key: ": bw="
        
        
        Result Line:   WRITE: bw=343MiB/s (360MB/s), 343MiB/s-343MiB/s (360MB/s-360MB/s), io=6863MiB (7196MB), run=20008-20008msec
        
        
        No Test Results
        
        
        Result Key: #_RESULT_#
        
        
        Template Line:    READ: bw=#_RESULT_# (861MB/s), 821MiB/s-821MiB/s (861MB/s-861MB/s), io=16.4GiB (17.3GB), run=20002-20002msec
        
        
        Result Parsing Search Key: ": bw="
        
        
        Result Line:   WRITE: bw=343MiB/s (360MB/s), 343MiB/s-343MiB/s (360MB/s-360MB/s), io=6863MiB (7196MB), run=20008-20008msec
        
        
        Test Result Parser Returning: 343
        
        
        Result Key: #_RESULT_#
        
        
        Template Line:   write: IOPS=#_RESULT_# BW=891MiB/s (934MB/s)(17.5GiB/20006msec)
        
        
        Result Parsing Search Key: "IOPS="
        
        
        Result Line:   write: IOPS=5484, BW=343MiB/s (360MB/s)(6863MiB/20008msec); 0 zone resets
        
        
        No Test Results
        
        
        Log File At: /var/lib/phoronix-test-suite/installed-tests/pts/fio-1.12.0/fio-1.12.0-1570811553-1.log
        
        
        Running Post-Test Script
        /var/lib/phoronix-test-suite/test-profiles/pts/fio-1.12.0/post.sh: line 3: cd: fio-3.1/: No such file or directory
        
        ###############################################################################################################################
        Flexible IO Tester:
        Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 64KB - Disk Target: Default Test Directory
        
        343 MB/s
        
        Average: 343 MB/s
        ###############################################################################################################################



        • #14
          Hi, I have this exact same issue.

          Several VMs are testing against a single software-defined storage solution; in this case I am using diskspd and trying to run it on 5 VMs simultaneously. Some VMs will carry out numerous runs whereas others only a few. The FORCE_TIMES_TO_RUN variable only seems to define the number of "Trial Runs" that the benchmark will carry out. Some systems only carry out the trial runs; other systems go on to do numerous actual runs (the ones with the *).

          I tried setting FORCE_TIMES_TO_RUN to 7. As you can see from the output below, it carried out 7 Trial Runs (no *), but then went on to do another 21 runs (*), whereas another machine ran only the 7 Trial Runs and stopped at that point. I'd like to force all the VMs to do the same number of runs.
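          For reference, a minimal sketch of the setup described above, with FORCE_TIMES_TO_RUN exported before launching the suite. The profile name and the use of the non-interactive batch-benchmark subcommand here are illustrative, not a confirmed reproduction of my exact invocation:

```shell
# Sketch: request exactly 7 runs per test by exporting the variable
# before launching the suite. On the setups described in this thread,
# extra (*) runs were still added despite this setting.
export FORCE_TIMES_TO_RUN=7

# Illustrative invocation; "batch-benchmark" runs without prompts.
if command -v phoronix-test-suite >/dev/null 2>&1; then
    phoronix-test-suite batch-benchmark windows/diskspd
else
    echo "phoronix-test-suite not installed; exported FORCE_TIMES_TO_RUN=$FORCE_TIMES_TO_RUN"
fi
```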

          Diskspd 2.0.21:
          windows/diskspd-1.1.0 [Threads Per Target: 8 - Write Requests (Percent): 50 - File Size: 2000M - Block Size: 4KB]
          Test 1 of 1
          Estimated Trial Run Count: 7
          Estimated Time To Completion: 14 Minutes [10:51 GMT]
          Started Run 1 @ 10:38:01
          Started Run 2 @ 10:38:44
          Started Run 3 @ 10:39:23
          Started Run 4 @ 10:40:10
          Started Run 5 @ 10:40:49
          Started Run 6 @ 10:41:29
          Started Run 7 @ 10:42:08
          Started Run 8 @ 10:42:47 *
          Started Run 9 @ 10:43:27 *
          Started Run 10 @ 10:44:06 *
          Started Run 11 @ 10:44:45 *
          Started Run 12 @ 10:45:25 *
          Started Run 13 @ 10:46:04 *
          Started Run 14 @ 10:46:44 *
          Started Run 15 @ 10:47:23 *
          Started Run 16 @ 10:48:03 *
          Started Run 17 @ 10:48:43 *
          Started Run 18 @ 10:49:22 *
          Started Run 19 @ 10:50:01 *
          Started Run 20 @ 10:50:41 *
          Started Run 21 @ 10:51:20 *
          Started Run 22 @ 10:52:00 *
          Started Run 23 @ 10:52:39 *
          Started Run 24 @ 10:53:19 *
          Started Run 25 @ 10:53:58 *
          Started Run 26 @ 10:54:37 *
          Started Run 27 @ 10:55:17 *
          Started Run 28 @ 10:55:56 *

          Any help would be much appreciated.

          Thanks






          • #15
            Originally posted by ciderdude View Post
            Hi, I have this exact same issue.

            Several VMs are testing against a single software-defined storage solution; in this case I am using diskspd and trying to run it on 5 VMs simultaneously. Some VMs will carry out numerous runs whereas others only a few. The FORCE_TIMES_TO_RUN variable only seems to define the number of "Trial Runs" that the benchmark will carry out. Some systems only carry out the trial runs; other systems go on to do numerous actual runs (the ones with the *).

            I tried setting FORCE_TIMES_TO_RUN to 7. As you can see from the output below, it carried out 7 Trial Runs (no *), but then went on to do another 21 runs (*), whereas another machine ran only the 7 Trial Runs and stopped at that point. I'd like to force all the VMs to do the same number of runs.

            Diskspd 2.0.21:
            windows/diskspd-1.1.0 [Threads Per Target: 8 - Write Requests (Percent): 50 - File Size: 2000M - Block Size: 4KB]
            Test 1 of 1
            Estimated Trial Run Count: 7
            Estimated Time To Completion: 14 Minutes [10:51 GMT]
            Started Run 1 @ 10:38:01
            Started Run 2 @ 10:38:44
            Started Run 3 @ 10:39:23
            Started Run 4 @ 10:40:10
            Started Run 5 @ 10:40:49
            Started Run 6 @ 10:41:29
            Started Run 7 @ 10:42:08
            Started Run 8 @ 10:42:47 *
            Started Run 9 @ 10:43:27 *
            Started Run 10 @ 10:44:06 *
            Started Run 11 @ 10:44:45 *
            Started Run 12 @ 10:45:25 *
            Started Run 13 @ 10:46:04 *
            Started Run 14 @ 10:46:44 *
            Started Run 15 @ 10:47:23 *
            Started Run 16 @ 10:48:03 *
            Started Run 17 @ 10:48:43 *
            Started Run 18 @ 10:49:22 *
            Started Run 19 @ 10:50:01 *
            Started Run 20 @ 10:50:41 *
            Started Run 21 @ 10:51:20 *
            Started Run 22 @ 10:52:00 *
            Started Run 23 @ 10:52:39 *
            Started Run 24 @ 10:53:19 *
            Started Run 25 @ 10:53:58 *
            Started Run 26 @ 10:54:37 *
            Started Run 27 @ 10:55:17 *
            Started Run 28 @ 10:55:56 *

            Any help would be much appreciated.

            Thanks



            What PTS version are you on? FORCE_TIMES_TO_RUN should never dynamically increase like that, at least not on any recent version....
            Michael Larabel
            https://www.michaellarabel.com/



            • #16
              Hi Michael

              I am using PTS 9.2.0.

              Incidentally, I have to start all these benchmark tests manually on the clients, as the diskspd benchmark does not appear in the list of benchmarks in the server GUI.





              • #17
                Originally posted by ciderdude View Post
                Hi Michael

                I am using PTS 9.2.0.

                Incidentally, I have to start all these benchmark tests manually on the clients, as the diskspd benchmark does not appear in the list of benchmarks in the server GUI.


                Does this FORCE_TIMES_TO_RUN-exceeding issue happen for other tests besides diskspd? As for the Windows tests not appearing in the Phoromatic server GUI, that is an easy fix I can get into Git.
                Michael Larabel
                https://www.michaellarabel.com/



                • #18
                  Yes, I am running a full set of the iozone benchmark at the moment on one of the Windows VMs. For each test it does an initial 7 trial runs and then goes on to do numerous non-trial runs. See the output below, which is already up to run 34 and still going:


                  IOzone 3.465:
                  pts/iozone-1.9.5 [Record Size: 1MB - File Size: 4GB - Disk Test: Write Performance]
                  Test 8 of 24
                  Estimated Trial Run Count: 7
                  Estimated Test Run-Time: 7 Minutes
                  Estimated Time To Completion: 1 Hour, 43 Minutes [17:34 GMT]
                  Started Run 1 @ 15:51:59 2 [main] iozone 3800 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 2 @ 15:52:24 2 [main] iozone 6580 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 3 @ 15:52:51 2 [main] iozone 540 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 4 @ 15:53:19 2 [main] iozone 2068 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 5 @ 15:53:45 2 [main] iozone 1408 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 6 @ 15:54:11 2 [main] iozone 704 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 7 @ 15:54:35 2 [main] iozone 5680 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 8 @ 15:55:02 * 0 [main] iozone 4576 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 9 @ 15:55:30 * 123 [main] iozone 6420 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 10 @ 15:55:56 * 2 [main] iozone 5568 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 11 @ 15:56:22 * 2 [main] iozone 5552 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 12 @ 15:56:47 * 2 [main] iozone 1752 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 13 @ 15:57:15 * 2 [main] iozone 5604 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 14 @ 15:57:41 * 2 [main] iozone 4936 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 15 @ 15:58:07 * 2 [main] iozone 1408 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 16 @ 15:58:31 * 2 [main] iozone 676 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 17 @ 15:58:57 * 2 [main] iozone 4692 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 18 @ 15:59:24 * 2 [main] iozone 5028 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 19 @ 15:59:49 * 2 [main] iozone 6360 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 20 @ 16:00:15 * 2 [main] iozone 3716 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 21 @ 16:00:41 * 2 [main] iozone 6116 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 22 @ 16:01:08 * 2 [main] iozone 4724 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 23 @ 16:01:34 * 0 [main] iozone 5732 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 24 @ 16:01:58 * 2 [main] iozone 1588 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 25 @ 16:02:23 * 2 [main] iozone 4512 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 26 @ 16:02:52 * 9 [main] iozone 6992 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 27 @ 16:03:17 * 2 [main] iozone 5180 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 28 @ 16:03:42 * 2 [main] iozone 5400 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 29 @ 16:04:07 * 2 [main] iozone 3036 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 30 @ 16:04:33 * 2 [main] iozone 2068 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 31 @ 16:04:59 * 2 [main] iozone 676 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]

                  Started Run 32 @ 16:05:26 * 2 [main] iozone 5368 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
                  the public mailing list [email protected]



                  • #19
                    Originally posted by ciderdude View Post
                    Yes, I am running a full set of the iozone benchmark at the moment on one of the Windows VMs and for each test it does an initial 7 trial runs and then goes on to do numerous non trial runs, see the output below which is already up to run 34 and is still going:

                    Just to confirm, you are using FORCE_TIMES_TO_RUN and not FORCE_MIN_TIMES_TO_RUN?
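                    For anyone checking their own setup, a quick way to see which of the two variables is actually exported in the session. The semantics noted in the comments are my reading of the two names and are an assumption, not confirmed behavior:

```shell
# Print which PTS run-count variables are exported in this shell.
# Assumed semantics (not confirmed here): FORCE_TIMES_TO_RUN pins an
# exact run count, while FORCE_MIN_TIMES_TO_RUN only sets a floor and
# still lets the suite add extra runs when results vary.
for v in FORCE_TIMES_TO_RUN FORCE_MIN_TIMES_TO_RUN; do
    if [ -n "$(printenv "$v")" ]; then
        echo "$v is set to $(printenv "$v")"
    else
        echo "$v is not set"
    fi
done
```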
                    Michael Larabel
                    https://www.michaellarabel.com/



                    • #20
                      It did 35 runs in total on that particular test and has now moved on to the next iozone test. By the way, that WARNING message doesn't seem to stop the benchmarks from running; they run to completion regardless. Perhaps you know of an easy fix for it.
