Test run number does not correlate to deviation

  • Michael
    replied
    Short-running tests will run up to 5x the default run count if necessary to obtain a more accurate result. However, after 3~4x the default count it will re-evaluate the deviation. If the deviation isn't tightening, or the results otherwise appear sporadic such that more runs are unlikely to provide a more accurate result, it will bail out early. This is likely what you are seeing: the run is exiting early since it looks like it can't obtain a more accurate result.
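    The behavior described above can be sketched roughly as follows. This is a simplified illustration, not PTS's actual code; the deviation threshold and the exact multiplier values are made-up assumptions for the example:

    ```python
    import statistics

    def relative_stdev(samples):
        """Sample standard deviation as a percentage of the mean."""
        return statistics.stdev(samples) / statistics.mean(samples) * 100

    def run_adaptive(run_once, base_runs=3, max_multiplier=5,
                     reevaluate_multiplier=3, target_deviation=3.5):
        """Run a test adaptively: extend up to 5x the default run count
        while the deviation stays high, but once ~3x the default count has
        been reached, bail out early if extra runs are no longer tightening
        the deviation (i.e., the results look sporadic)."""
        samples = [run_once() for _ in range(base_runs)]
        prev_deviation = relative_stdev(samples)
        while (relative_stdev(samples) > target_deviation
               and len(samples) < base_runs * max_multiplier):
            samples.append(run_once())
            deviation = relative_stdev(samples)
            # Past the re-evaluation point, results that aren't converging
            # trigger an early exit instead of more runs.
            if (len(samples) >= base_runs * reevaluate_multiplier
                    and deviation >= prev_deviation):
                break
            prev_deviation = deviation
        return samples
    ```

    Under this sketch, a stable test stops after the default 3 runs, a noisy-but-converging test runs out to the 5x cap, and a test whose deviation stops improving exits somewhere in between, which would explain a run ending at 12 samples with the deviation still high.
    
    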



  • mateusz_bl
    started a topic Test run number does not correlate to deviation

    Hi

    I've been running the Redis benchmark recently and I've found that there's something weird with the run count. From what I can tell, the run count is usually set to 3, unless the deviation is high - then the test is repeated (no more than X times).

    In my case, each Redis test (like GET, SET) is usually executed 15 times (so I guess that's the maximum run count). That would be OK if I hadn't noticed that sometimes a test is executed fewer than 15 times yet still ends with a high deviation. Two examples:

    1. Test with pretty low (3.77%) deviation; executed 15 times:
    Code:
    Redis 6.0.9:
    pts/redis-1.3.1 [Test: LPOP]
    Test 1 of 5
    Estimated Trial Run Count: 3
    Estimated Test Run-Time: 1 Minute
    Estimated Time To Completion: 5 Minutes [05:08 PDT]
    Started Run 1 @ 05:03:35
    ...
    Started Run 15 @ 05:08:38 *
    
    Test: LPOP:
    1352482.5
    1527436.75
    1544173
    1518607.75
    1440539.38
    1547039.5
    1436016
    1464175.75
    1523002.12
    1563477.25
    1511287.88
    1499724
    1501050.75
    1541787.25
    1550682.62
    
    Average: 1501432.17 Requests Per Second
    Deviation: 3.77%
    Samples: 15
    2. Test with high (7.43%) deviation; executed only 12 times:
    Code:
    Redis 6.0.9:
    pts/redis-1.3.1 [Test: GET]
    Test 4 of 5
    Estimated Trial Run Count: 3
    Estimated Test Run-Time: 1 Minute
    Estimated Time To Completion: 2 Minutes [16:22 UTC]
    Started Run 1 @ 16:21:07
    ...
    Started Run 12 @ 16:24:43 *
    
    Test: GET:
    2108841.75
    1970863.25
    2158901.5
    1975906.75
    2194447.25
    2176292.5
    2145050
    2106593.5
    2074728.5
    1732501.75
    1774308
    2074258.38
    
    Average: 2041057.76 Requests Per Second
    Deviation: 7.43%
    Samples: 12
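    For reference, the deviation PTS prints appears to be the sample standard deviation expressed as a percentage of the mean; recomputing it from the GET samples above reproduces the reported figures (a quick sanity check, not PTS's actual code):

    ```python
    import statistics

    # GET samples from example 2 above
    samples = [
        2108841.75, 1970863.25, 2158901.5, 1975906.75,
        2194447.25, 2176292.5, 2145050, 2106593.5,
        2074728.5, 1732501.75, 1774308, 2074258.38,
    ]

    mean = statistics.mean(samples)
    # Sample standard deviation as a percentage of the mean
    deviation = statistics.stdev(samples) / mean * 100

    print(f"Average: {mean:.2f} Requests Per Second")   # ~2041057.76
    print(f"Deviation: {deviation:.2f}%")               # ~7.43%
    ```
    
    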

    My question is: why was the test in the first example repeated so many times, when the test in the second example (with a higher deviation) stopped after 12 runs? How is this calculated?

    I was using Phoronix Test Suite v10.2.2. I didn't change any default configuration variables (like FORCE_TIMES_TO_RUN), nor did I create a user-config.xml file.

    Thanks!