The Test Failure Criteria feature lets you define your test's pass/fail criteria based on metrics such as response time, errors, hits/s, and test duration.
When enabled, the pass/fail results appear on the workspace dashboard, making it easier to monitor your testing over time.
Go to the test configuration page and look under the file upload panel. At the bottom of the optional test configuration list, you'll find the 'Test Failure Criteria' option (see screenshot below).
Setting up failure criteria
Four parameters must be set for a Failure Criteria rule to be valid, along with several optional settings:
- Element Label - Specify this if you want the rule to apply only to a particular label from your script. It is set to ALL by default.
- KPI - Select the metric the rule applies to (e.g. ResponseTime.avg, ResponseTime.max, errors.percent, hits.count). Use the autocomplete to discover more available metrics.
- Comparison - The comparison operator for this rule: lt = less than, gt = greater than, eq = equal to, ne = not equal to.
- Threshold - The numeric value the KPI is compared against.
- Description - An optional message that explains the rule.
- Stop Test - If this box is checked, the test stops immediately when this criterion fails.
- Evaluate last 1-minute sliding window only - If this box is checked, the criterion is evaluated against the last minute of data only, rather than the entire test. To see when a criterion failed, hover the mouse over the number of violations:
- Ignore Failure Criteria during ramp-up - This option makes the Failure Criteria mechanism take effect only after ramp-up is over and the load has reached the maximum number of users.
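The four required parameters combine into a simple evaluation rule. As a rough illustration only (a hypothetical sketch, not the product's actual implementation), a criterion could be modeled like this in Python:

```python
import operator

# Map the documented comparison codes to Python operators.
COMPARISONS = {
    "lt": operator.lt,  # less than
    "gt": operator.gt,  # greater than
    "eq": operator.eq,  # equal to
    "ne": operator.ne,  # not equal to
}

def criterion_fails(kpi_value, comparison, threshold):
    """Return True if the measured KPI value violates the rule.

    A rule such as 'errors.percent gt 0.1' fails when the measured
    value makes the comparison against the threshold true.
    """
    return COMPARISONS[comparison](kpi_value, threshold)

# Rule 'errors.percent gt 0.1' with a measured error rate of 1.0:
print(criterion_fails(1.0, "gt", 0.1))   # True - criterion fails
print(criterion_fails(0.05, "gt", 0.1))  # False - criterion passes
```

The Element Label parameter would simply scope which samples feed `kpi_value` before this check runs.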
Click APPLY.
Test Failure Criteria Report
In the example above, the failure criterion for this rule was an error percentage greater than 0.1. The error percentage in this test was 1.0, so both the criterion and the test status are 'Failed'.
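To make the report concrete, here is a small sketch of how the 'Failed' status in the example above could be derived. It assumes (hypothetically) that the error percentage is simply failed samples over total samples:

```python
def error_percent(failed_samples, total_samples):
    """Error percentage: share of failed samples, as a percentage."""
    return 100.0 * failed_samples / total_samples

# Hypothetical sample counts yielding the 1.0 error percentage
# from the report example, checked against the 0.1 threshold.
measured = error_percent(10, 1000)  # 1.0
threshold = 0.1
status = "Failed" if measured > threshold else "Passed"
print(measured, status)  # 1.0 Failed
```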