How to time limit / strict loop limit runs?

  • Yves
    started a topic How to time limit / strict loop limit runs?

    Hi there,

    I am trying to benchmark a new software defined storage solution, so I have set up multiple VMs on the compute cluster and can run the same benchmark command on all of the systems at the same time. I also set the environment variables TEST_RESULTS_IDENTIFIER and FORCE_TIMES_TO_RUN=12, but if I run, for example, phoronix-test-suite benchmark pts/aio-stress on all the VMs simultaneously, some still do 15 loops and some even 40. That of course skews the results, since the load on the storage drops once one VM has already finished. Is there a workaround? For fio it is even more important that all VMs run, for example, 5 minutes of 4K random reads, then 5 minutes of 4K random writes, and so on.
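    For reference, the command sequence on each VM looks roughly like this (the identifier value here is just an illustrative example, not my actual label):

```shell
# Per-VM invocation (sketch, bash assumed); the label is an example value
export TEST_RESULTS_IDENTIFIER="sds-bench-$(hostname)"  # unique result label per VM
export FORCE_TIMES_TO_RUN=12                            # requested number of runs
phoronix-test-suite benchmark pts/aio-stress
```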

    Thanks for your AMAZING benchmarking tool!

    Regards,
    Yves

  • ciderdude
    replied

    Yes, FORCE_TIMES_TO_RUN=7; I did not know there was a MIN setting.



  • Michael
    replied
    Originally posted by ciderdude View Post
    It did 35 runs in total on that particular test and has now moved on to the next iozone test. By the way, that WARNING message doesn't seem to stop the benchmarks from running; they run to completion regardless. Perhaps you know of an easy fix for it.
    That warning message is from iozone/cygwin itself and outside of PTS.
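    If the noise is bothersome, one untested workaround sketch is to filter it out of the console output (assuming the warning is emitted on stderr and a bash shell):

```shell
# Untested sketch: suppress only the Cygwin find_fast_cwd warning lines,
# leaving all other benchmark output untouched (bash process substitution)
phoronix-test-suite benchmark pts/iozone 2> >(grep -v 'find_fast_cwd' >&2)
```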



  • ciderdude
    replied
    It did 35 runs in total on that particular test and has now moved on to the next iozone test. By the way, that WARNING message doesn't seem to stop the benchmarks from running; they run to completion regardless. Perhaps you know of an easy fix for it.



  • Michael
    replied
    Originally posted by ciderdude View Post
    Yes, I am running a full set of the iozone benchmark at the moment on one of the Windows VMs, and for each test it does an initial 7 trial runs and then goes on to do numerous non-trial runs; see the output below, which is already up to run 34 and still going.

    Just to confirm, you are using FORCE_TIMES_TO_RUN and not FORCE_MIN_TIMES_TO_RUN?
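    Roughly, as I understand the two options (a sketch of the behavior, not official documentation):

```shell
# FORCE_TIMES_TO_RUN=7     -> exactly 7 runs, no dynamic extension
# FORCE_MIN_TIMES_TO_RUN=7 -> at least 7 runs; PTS may add more on its own,
#                             e.g. when the results show high variance
FORCE_TIMES_TO_RUN=7 phoronix-test-suite benchmark pts/iozone
```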



  • ciderdude
    replied
    Yes, I am running a full set of the iozone benchmark at the moment on one of the Windows VMs, and for each test it does an initial 7 trial runs and then goes on to do numerous non-trial runs; see the output below, which is already up to run 34 and still going:


    IOzone 3.465:
    pts/iozone-1.9.5 [Record Size: 1MB - File Size: 4GB - Disk Test: Write Performance]
    Test 8 of 24
    Estimated Trial Run Count: 7
    Estimated Test Run-Time: 7 Minutes
    Estimated Time To Completion: 1 Hour, 43 Minutes [17:34 GMT]
    Started Run 1 @ 15:51:59
    Started Run 2 @ 15:52:24
    Started Run 3 @ 15:52:51
    Started Run 4 @ 15:53:19
    Started Run 5 @ 15:53:45
    Started Run 6 @ 15:54:11
    Started Run 7 @ 15:54:35
    Started Run 8 @ 15:55:02 *
    Started Run 9 @ 15:55:30 *
    Started Run 10 @ 15:55:56 *
    Started Run 11 @ 15:56:22 *
    Started Run 12 @ 15:56:47 *
    Started Run 13 @ 15:57:15 *
    Started Run 14 @ 15:57:41 *
    Started Run 15 @ 15:58:07 *
    Started Run 16 @ 15:58:31 *
    Started Run 17 @ 15:58:57 *
    Started Run 18 @ 15:59:24 *
    Started Run 19 @ 15:59:49 *
    Started Run 20 @ 16:00:15 *
    Started Run 21 @ 16:00:41 *
    Started Run 22 @ 16:01:08 *
    Started Run 23 @ 16:01:34 *
    Started Run 24 @ 16:01:58 *
    Started Run 25 @ 16:02:23 *
    Started Run 26 @ 16:02:52 *
    Started Run 27 @ 16:03:17 *
    Started Run 28 @ 16:03:42 *
    Started Run 29 @ 16:04:07 *
    Started Run 30 @ 16:04:33 *
    Started Run 31 @ 16:04:59 *
    Started Run 32 @ 16:05:26 *

    (Each run also printed the same Cygwin message: "[main] iozone find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to the public mailing list [email protected]")



  • Michael
    replied
    Originally posted by ciderdude View Post
    Hi Michael

    I am using PTS 9.2.0.

    Incidentally, I have to start all these benchmark tests manually at the clients, as the diskspd benchmark does not appear in the list of benchmarks in the server GUI.


    Does this FORCE_TIMES_TO_RUN exceeding issue happen for other tests besides diskspd? As for the Windows tests not appearing in the Phoromatic server GUI, that is an easy fix I can get into Git.



  • ciderdude
    replied
    Hi Michael

    I am using PTS 9.2.0.

    Incidentally, I have to start all these benchmark tests manually at the clients, as the diskspd benchmark does not appear in the list of benchmarks in the server GUI.





  • Michael
    replied
    Originally posted by ciderdude View Post
    Hi, I have exactly the same issue.

    Several VMs testing against a single software defined storage solution; in this case I am using diskspd and trying to run it on 5 VMs simultaneously. Some VMs will carry out numerous runs, whereas others only a few. The FORCE_TIMES_TO_RUN variable only seems to define the number of "Trial Runs" that the benchmark will carry out. Some systems only carry out the trial runs; other systems go on to do numerous actual runs (the ones with the *).

    I tried setting FORCE_TIMES_TO_RUN to 7; it carried out 7 Trial Runs (no *) but then went on to do another 21 runs (*), whereas another machine only ran the 7 Trial Runs and stopped at that point. I'd like to force all the VMs to do the same number of runs.

    [quoted diskspd run log trimmed; the full output is in the original post below]

    Any help would be much appreciated.

    Thanks


    What PTS version are you on? FORCE_TIMES_TO_RUN should never dynamically increase like that, at least on any recent version...



  • ciderdude
    replied
    Hi, I have exactly the same issue.

    Several VMs testing against a single software defined storage solution; in this case I am using diskspd and trying to run it on 5 VMs simultaneously. Some VMs will carry out numerous runs, whereas others only a few. The FORCE_TIMES_TO_RUN variable only seems to define the number of "Trial Runs" that the benchmark will carry out. Some systems only carry out the trial runs; other systems go on to do numerous actual runs (the ones with the *).

    I tried setting FORCE_TIMES_TO_RUN to 7; as you can see from the output below, it carried out 7 Trial Runs (no *) but then went on to do another 21 runs (*), whereas another machine only ran the 7 Trial Runs and stopped at that point. I'd like to force all the VMs to do the same number of runs.

    Diskspd 2.0.21:
    windows/diskspd-1.1.0 [Threads Per Target: 8 - Write Requests (Percent): 50 - File Size: 2000M - Block Size: 4KB]
    Test 1 of 1
    Estimated Trial Run Count: 7
    Estimated Time To Completion: 14 Minutes [10:51 GMT]
    Started Run 1 @ 10:38:01
    Started Run 2 @ 10:38:44
    Started Run 3 @ 10:39:23
    Started Run 4 @ 10:40:10
    Started Run 5 @ 10:40:49
    Started Run 6 @ 10:41:29
    Started Run 7 @ 10:42:08
    Started Run 8 @ 10:42:47 *
    Started Run 9 @ 10:43:27 *
    Started Run 10 @ 10:44:06 *
    Started Run 11 @ 10:44:45 *
    Started Run 12 @ 10:45:25 *
    Started Run 13 @ 10:46:04 *
    Started Run 14 @ 10:46:44 *
    Started Run 15 @ 10:47:23 *
    Started Run 16 @ 10:48:03 *
    Started Run 17 @ 10:48:43 *
    Started Run 18 @ 10:49:22 *
    Started Run 19 @ 10:50:01 *
    Started Run 20 @ 10:50:41 *
    Started Run 21 @ 10:51:20 *
    Started Run 22 @ 10:52:00 *
    Started Run 23 @ 10:52:39 *
    Started Run 24 @ 10:53:19 *
    Started Run 25 @ 10:53:58 *
    Started Run 26 @ 10:54:37 *
    Started Run 27 @ 10:55:17 *
    Started Run 28 @ 10:55:56 *

    Any help would be much appreciated.

    Thanks



