The Failure Criteria feature lets you set your test's pass/fail criteria for various metrics, such as response time, errors, hits/s, and test duration.
Just click the slider to make it appear:
The following parameters are available:
- Label - Applies the rule to a particular label from your script. It's set to "ALL" (all labels) by default.
- Key Performance Indicator - Select the specific metric you'd like to apply a rule for. Click the down arrow on the right side of the field to open a drop-down menu and review available metrics to monitor.
- Condition - The comparison operator for this rule: "Less than", "Greater than", "Equal to", or "Not Equal to". Click the down arrow on the right side of the field to open a drop-down menu.
- Threshold - The numeric value the selected metric is compared against.
- Stop Test - If this box is checked, the test stops immediately when the criterion fails; otherwise, the test continues running uninterrupted.
- Delete Failure Criteria - Clicking this trash bin icon will delete the criteria.
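To make the parameters above concrete, here is a minimal sketch of how a single rule could be evaluated. This is purely illustrative: the field names (`label`, `kpi`, `condition`, `threshold`, `stop_test`) simply mirror the UI fields described above, and none of this reflects BlazeMeter's actual implementation.

```python
import operator

# Map the UI's condition names to Python comparison functions.
CONDITIONS = {
    "Less than": operator.lt,
    "Greater than": operator.gt,
    "Equal to": operator.eq,
    "Not Equal to": operator.ne,
}

def evaluate(criterion, metric_value):
    """Return True if the criterion FAILS for the observed metric value."""
    compare = CONDITIONS[criterion["condition"]]
    return compare(metric_value, criterion["threshold"])

# A hypothetical rule: fail if average response time exceeds 500 ms.
rule = {
    "label": "ALL",
    "kpi": "avg response time (ms)",
    "condition": "Greater than",
    "threshold": 500,
    "stop_test": True,
}

print(evaluate(rule, 620))  # True  -> criterion failed; Stop Test would halt the run
print(evaluate(rule, 400))  # False -> criterion passed
```

In this sketch the rule fires only when the comparison holds, which matches the Threshold and Condition semantics described above.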
The Advanced configuration drop-down provides additional options:
- Evaluate last 1-minute sliding window only - Check this box to evaluate the criterion over a one-minute sliding window. Failures are marked in red on the load report at one-minute granularity, so you can see exactly when a criterion failed.
- Ignore failure criteria during rampup - Checking this box causes the test to ignore any failures that occur during ramp-up, so criteria are evaluated only after ramp-up ends.
Once configured, the pass/fail results can be seen in the workspace dashboard, making it easier to monitor your testing over time.
Note: For Taurus tests (tests that use a YAML configuration file), be aware that BlazeMeter's Failure Criteria is a separate feature from Taurus's own Pass/Fail Criteria. The two are unrelated.