BlazeMeter's API Functional Test allows you to test whether your APIs work as expected.
- Create a Test
- Upload an existing script (e.g. JMeter)
- Use the UI to create a test
- Write a test script in BlazeMeter
- Record a test
- Create test from API specification / Import Swagger file
- If you already have JMeter scripts or Taurus YAML scripts, then simply upload and run them.
(Note: If you need to whitelist the IPs for the locations available, please visit this article.)
Create a Test
To create a new test of the type "API Functional Test", follow these steps:
1. Create a new test using the "Create Test" button
2. Choose "API Functional Test"
You can define your scenario directly in the UI...
... or upload an existing test script (JMeter, Taurus YAML).
(Note: One of the two options above may be chosen, but the two cannot be mixed; in other words, you cannot upload an existing JMX script then edit it via the UI.)
Upload an Existing Script to Create an API Functional Test
1. Click the menu at the bottom of the left sidebar
2. Click "Upload existing test script (e.g. JMeter)"
(Note: If you already created a scenario in the UI, that scenario will not be preserved once you switch to the upload-existing-scripts mode.)
Use the UI to Create an API Functional Test
Click on "Request Name" and enter a description of what the API call does. This description will also show up in the report.
Enter a valid URL (or an endpoint if you are using the default address field in the scenario configuration). Change the request method in the drop-down:
Request chaining - use value from a response in a subsequent request
To use a value from a response in a subsequent request, use the "Extract from response" option. For example, if your request creates a user and the server returns the user ID in the response then you can then save that ID in a variable and use it in another request to change or delete that user.
Click the button in the variable name field to copy the variable name including the right syntax, then simply paste it using Cmd/Ctrl + V in the next request:
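In the editor, the same chaining can be sketched in Taurus YAML using extract-jsonpath (the endpoint, field, and variable names below are illustrative assumptions, not part of any real API):

```yaml
scenarios:
  user-lifecycle:
    requests:
      - url: https://api.example.com/users        # hypothetical endpoint that creates a user
        method: POST
        headers:
          Content-Type: application/json
        body: '{"name": "Jane Doe"}'
        extract-jsonpath:
          userId: $.id                             # save the returned ID into ${userId}
      - url: https://api.example.com/users/${userId}   # reuse the saved ID in a later request
        method: DELETE
```

The first request stores the server-returned ID in the `userId` variable, and the second request interpolates it into the URL.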
Creating multiple scenarios
An API Functional Test can contain multiple scenarios. To create an additional scenario, open the menu at the bottom of the left sidebar.
Only a single scenario will be expanded at a time. To expand another scenario, simply click on the scenario name on the left side.
Scenario settings screen
The scenario settings screen allows you to set configurations that are applied to all requests within the scenario such as "Default Address", "Variables" and "Headers". Furthermore, you can select the location that the scenario will be executed from. Simply click on the scenario name ("My Scenario" in this case) to display the scenario settings screen.
Select a location / Run tests from private locations
On the scenario settings screen you can select a location to run the test from. For fast execution, simply don't select a location or select the "Functional Testing Default Location". You can also run tests from private locations.
Default Address / base URL
On the scenario settings screen you can enter a default address that will be used for all the requests in the scenario. This allows you for example to easily switch from a QA to a staging environment by changing a single URL (e.g. from http://qa.demoblaze.com to http://staging.demoblaze.com).
Once you've entered the default address you only need to enter the endpoint in your requests.
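In Taurus YAML terms, this corresponds to the default-address option; a sketch (the endpoints here are illustrative):

```yaml
scenarios:
  my-scenario:
    default-address: http://qa.demoblaze.com   # change this single line to switch environments
    requests:
      - /api/entries   # endpoint only; resolved against default-address
      - /api/view      # hypothetical second endpoint for illustration
```

Switching from QA to staging then means editing only the default-address value.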
Data driven testing / parametrize tests / set base URL dynamically
On the scenario settings screen you can upload a CSV file with data that you can use in your test.
The first row of your CSV file will be used as variable names.
You can then use these variables anywhere in your test.
Another way to use data from a CSV file is to parametrize your test - e.g. you can set the base URL dynamically. For example you could define a base URL in your CSV file and use that as a default address (on scenario settings screen) in your test.
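As a sketch, a CSV file whose first row names the variables could be wired in roughly like this (the file name, column names, and endpoint are assumptions for illustration):

```yaml
scenarios:
  my-scenario:
    data-sources:
      - users.csv   # first row, e.g. "username,password", defines the variable names
    requests:
      - url: /login
        method: POST
        body: '{"user": "${username}", "pass": "${password}"}'
```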
Scenario level variables / headers
On the scenario settings screen you can define variables and headers that will be applied to all requests in the scenario. For example if you have a username that you want to use for multiple requests or a header that applies to all requests.
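In Taurus YAML, scenario-level variables and headers look roughly like this (names and values are illustrative):

```yaml
scenarios:
  my-scenario:
    variables:
      username: testuser   # available as ${username} in every request of this scenario
    headers:
      X-Api-Key: abc123    # sent with every request in this scenario
```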
Common Functions / Helper Functions
At the top right you'll find a dropdown with some helper functions that will make it easier to create certain requests.
Templates / Authentication
Templates are a great way to make creating tests faster and easier. Templates can be added to your scenario by navigating to the scenario settings screen and selecting a template from the drop-down. There are templates for different types of authentication (Basic, Digest, OAuth), for SOAP API calls and many more. Also if you have suggestions for additional templates please let us know by submitting a feature request.
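As a rough idea of what an authentication template boils down to, Basic authentication is ultimately just an Authorization header on the scenario (the credentials below are a dummy example):

```yaml
scenarios:
  my-scenario:
    headers:
      Authorization: Basic dXNlcjpwYXNz   # base64 of "user:pass" (dummy credentials)
```

The templates save you from assembling such details by hand, especially for multi-step flows like OAuth.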
Write test script in BlazeMeter UI
If you prefer to code, you can write Taurus YAML test scripts directly in BlazeMeter. Simply switch from the UI to the editor using the toggle.
Any changes that you will make in the editor will also show up in the UI and vice versa.
When switching from the UI to the editor, you will see a Taurus YAML representation of the scenario(s) that you've created in the UI. You can edit the Taurus YAML script and also switch back to the UI. When editing the script, a basic YAML syntax check is running in the background. If the script is invalid you won't be able to switch back to the UI (the toggle will be greyed out).
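For instance, a single UI-built request with a status-code assertion might appear in the editor as YAML along these lines (the URL is illustrative):

```yaml
scenarios:
  My Scenario:
    requests:
      - url: http://qa.demoblaze.com/api/entries
        method: GET
        assert:
          - subject: http-code
            contains: ['200']
```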
To make sure that your script is valid Taurus YAML, click "Validate" (or "Revalidate" on subsequent checks) below the editor:
Snippets in the editor are the same as templates in the UI.
Files can be used in different ways in your test scenario. You can use a CSV file as a data source, provide a file for a request body, use it as a form attachment, or run pre- and post-processing scripts. Click the upload file button and select a file from your computer:
To use a file in your test scenario, first open the drop-down and click the copy button:
Then paste the file name in your script.
An example of using a file as a data source:
data-sources: # list of external data sources
  - path: user-data.csv # this is full form, path option is required
    variable-names: first,second # delimiter-separated list of variable names, empty by default
Note: When using a CSV file, only the first row of data will be used when running an API Functional Test.
An example of using a file in a request body:
body-file: path/to/file.txt # this file contents will be used as post body
An example of using a file as a form attachment:
upload-files: # attach files to form (and enable multipart/form-data)
  - param: summaryReport # form parameter name
    path: report.pdf # path to file
    mime-type: application/pdf # optional, Taurus will attempt to guess it automatically
An example of using files as pre- and post-processing scripts (in Taurus these can be attached to a request as JSR223 blocks; the script file names below are illustrative):

jsr223:
  - script-file: pre-process.groovy # illustrative file name
    execute: before # runs before the request
  - script-file: post-process.groovy # illustrative file name
    execute: after # runs after the request
Create test from API specification / Import Swagger file
Recording API Functional Tests
Finally, an even simpler way to get started with creating a test is to use BlazeMeter's Chrome extension.
API Functional Test reports
Once you run the test, you will get detailed results in the API Functional Test report.
The dashboard above shows how many responses passed and failed, how many errors were identified, and how many assertions failed, if any. Each request is, in fact, a 'test' in this context.
Click the toggle to filter the response to only see failed requests:
To the right, you will find links to test artifacts and log files:
Under 'Test Case', you will see the script's scenarios (if run on JMeter, then scenarios correlate to Thread Groups). By opening them you will see the requests they contain.
Each 'test' is a label in the test plan. When you open it, you will see a summary of this label's response time, latency, request and response sizes and the response code and message:
Let's look at the request and response of the following test.
The request data:
Looking into the request for this test, you can see the body (if you sent a body with the request), the headers and the cookies:
The response data:
Looking into the response to this test, you can see the response body, headers, and whether your assertion passed or failed (and why it failed):
Report History view
In the "History" view you can see an overview of past test runs including which passed and failed and how many requests passed and failed. Click on a row or on one of the bars to the right to see the details of an individual test run: