The Failure Criteria feature allows you to set your test's pass / fail criteria for various metrics, such as response times, errors, hits/s, test duration, and so on.
Enable and Define Failure Criteria
To enable failure criteria, toggle the button on:
The following parameters are available:
- Label - Specify a particular label from your script to apply this rule to. It's set to "ALL" (all labels) by default.
- Key Performance Indicator - Select the specific metric you'd like to apply a rule for. Click the down arrow on the right side of the field to open a drop-down menu and review available metrics to monitor.
- Condition - The comparison operator for this rule: "Less than", "Greater than", "Equal to", or "Not Equal to". Click the down arrow on the right side of the field to open a drop-down menu.
- Threshold - The numeric value you want this rule to apply to.
- Stop Test - If this box is checked, the test stops immediately when this criterion fails; otherwise, it continues running uninterrupted.
- Delete Failure Criteria - Clicking this trash bin icon will delete the criteria.
The Advanced configuration drop-down provides additional options:
- Evaluate last 1-minute sliding window only - Check this box to evaluate the criteria over a one-minute sliding window. Failures are marked in red on the load report at one-minute granularity, so you can see exactly when a criterion failed.
- Ignore failure criteria during rampup - Check this box to ignore any failures that occur during ramp-up; criteria are evaluated only after ramp-up ends.
Once configured, the pass/fail results can be viewed on the workspace dashboard, making it easier to monitor your testing over time.
Note: For Taurus tests (tests that use a YAML configuration file), we translate the majority of Taurus pass/fail capabilities into BlazeMeter's Failure Criteria, so that when a YAML script is uploaded to BlazeMeter, the pass/fail module in the script automatically appears in the test UI. You can also execute a test from Taurus with cloud provisioning, and the pass/fail module will be recognized by BlazeMeter and displayed in the report.
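For illustration, a Taurus pass/fail module such as the following maps onto the Failure Criteria fields described above (KPI, label, condition, threshold, and Stop Test); the label names here are example values, not required names:

```yaml
reporting:
- module: passfail
  criteria:
  # KPI: average response time, Label: IndexPage,
  # Condition/Threshold: greater than 150 ms, Stop Test: checked
  - avg-rt of IndexPage>150ms for 10s, stop as failed
  # KPI: failure rate, Label: ALL, test continues but is marked failed
  - fail>50% for 10s, continue as failed
```

When such a script is uploaded, these criteria appear in the test's Failure Criteria section of the UI.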
Define Failure Criteria Using the Baseline
When you set a report as a baseline, you can also set the failure criteria against the baseline. If performance degrades compared to the baseline in a subsequent test, the test will fail automatically.
Follow these steps:
- Go to the Performance tab, click Tests and select a test from the drop-down list.
The test reports window opens.
- Click the Configuration tab and scroll down to the Failure Criteria section.
- Toggle the button for Failure Criteria on.
- Check the box for Use from baseline. This checkbox is available only if there is a baseline selected for the test.
- You can:
- Set the threshold for each failure criterion from the selected baseline.
- Define an offset from the baseline that you want to tolerate, so that minor deviations from the baseline are not counted as failures.
If you use the baseline to define the threshold, the threshold becomes visible after saving.
Subsequent tests are compared against the baseline with these failure criteria. If performance degrades compared to the baseline and the defined offset, the test will fail automatically.
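The baseline-plus-offset comparison can be sketched as follows. This is an illustrative model only (the function name and the percentage-based offset are assumptions, not BlazeMeter's actual implementation):

```python
def exceeds_baseline(observed_ms: float, baseline_ms: float, offset_pct: float) -> bool:
    """Illustrative check: the criterion fails only when the observed value
    exceeds the baseline by more than the tolerated offset (a percentage).
    NOTE: hypothetical helper for explanation, not a BlazeMeter API."""
    tolerated = baseline_ms * (1 + offset_pct / 100)
    return observed_ms > tolerated

# With a baseline average response time of 200 ms and a 10% offset,
# the tolerated maximum is 220 ms: 215 ms passes, 230 ms fails.
```

This shows why a small tolerated offset prevents minor run-to-run variation from failing the test.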