BlazeMeter's API Functional Test allows you to test whether your APIs work as expected. Build entire test scenarios directly in BlazeMeter's UI. Use templates to build your tests faster. If you prefer code, simply switch between the UI and the editor and write test scripts in YAML syntax. And if you already have JMeter scripts or Taurus YAML scripts, simply upload and run them.
Create a new test of type "API Functional Test"
You can define your scenario directly in the UI...
... or upload an existing test script (JMeter, Taurus YAML)
Click the "Upload Test Scenario Files" button (top right corner)
Then click the blue plus button to select files from your computer or from a shared folder in BlazeMeter.
Alternatively, you can write your Taurus YAML test script directly in BlazeMeter. Simply switch from the UI to the editor using the toggle.
Any changes you make in the editor will also show up in the UI, and vice versa.
Finally, an even simpler way to create a test is to use BlazeMeter's Chrome extension to record an API Functional Test script.
Using the UI to create an API Functional Test
Click on "Request Name" and enter a description of what the API call does. This description will also show up in the report.
Enter a valid URL (or just an endpoint if you are using the "Default Address" field in the scenario configuration). Select the request method from the drop-down.
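If you later switch to the editor, the same request appears as Taurus YAML. A minimal sketch of a single request (the scenario name, label and endpoint below are hypothetical):

scenarios:
  smoke-test:
    requests:
    - label: Get user                        # the "Request Name" shown in the report
      url: https://api.example.com/users/1   # hypothetical endpoint
      method: GET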
To use a value from a response in a subsequent request, go to the "Extract from response" tab. For example, if your request creates a user and the response contains the user ID, you can save that ID in a variable and use it in the next request to delete that user.
Click the copy button in the variable name field to copy the variable name, including the correct syntax. Then simply paste it using Cmd/Ctrl + V in the next request.
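In Taurus YAML, the same create-then-delete flow can be expressed with extract-jsonpath. A minimal sketch, assuming a hypothetical endpoint whose JSON response contains an id field:

scenarios:
  user-lifecycle:
    requests:
    - label: Create user
      url: https://api.example.com/users     # hypothetical endpoint
      method: POST
      body: '{"name": "Jane"}'
      extract-jsonpath:
        userId:                              # variable name
          jsonpath: $.id                     # pulls the user ID out of the response
    - label: Delete user
      url: https://api.example.com/users/${userId}   # reuses the extracted value
      method: DELETE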
The scenario settings screen allows you to set configurations that are applied to all requests within the scenario, such as "Default Address", "Variables" and "Headers". Furthermore, you can select the location that the scenario will be executed from. For fast execution, simply don't select a location or select "Functional Testing Default Location".
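In Taurus YAML, these scenario-level settings map to the default-address, variables and headers options. A minimal sketch with hypothetical values:

scenarios:
  my-scenario:
    default-address: https://api.example.com   # hypothetical base URL
    variables:
      apiVersion: v1
    headers:
      Content-Type: application/json           # applied to every request in the scenario
    requests:
    - /${apiVersion}/users                     # endpoint only, resolved against default-address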
Templates are a great way to make creating tests faster and easier. Templates can be added to your scenario by navigating to the scenario settings screen and selecting a template from the dropdown.
An API Functional Test can contain multiple scenarios.
Only a single scenario is expanded at a time. To expand another scenario, click the triangle next to the scenario name.
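In Taurus YAML, a test with multiple scenarios corresponds to multiple entries under scenarios, with one execution entry per scenario. A minimal sketch with hypothetical names and endpoints (this is also the kind of script you will see when switching to the editor):

execution:
- scenario: create-users
- scenario: delete-users
scenarios:
  create-users:
    requests:
    - url: https://api.example.com/users     # hypothetical endpoint
      method: POST
  delete-users:
    requests:
    - url: https://api.example.com/users/1   # hypothetical endpoint
      method: DELETE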
When switching from the UI to the editor, you will see a Taurus YAML representation of the scenario(s) that you've created in the UI. You can edit the Taurus YAML script and switch back to the UI at any time. While you edit the script, a basic YAML syntax check runs in the background. If the script is invalid, you won't be able to switch back to the UI (the toggle will be greyed out).
To make sure that your script is valid Taurus YAML, click "Validate" (or "Revalidate" if using it again) below the editor.
Snippets in the editor are the same as templates in the UI.
Files can be used in different ways in your test scenario. You can use a CSV file as a data source, provide a file for a request body, use a file as a form attachment, or run pre- and post-processing scripts. Click the upload file button and select a file from your computer.
To use a file in your test scenario, first open the drop-down and click the copy button.
Then paste the file name in your script.
Example of using a file as a data source
data-sources:                     # list of external data sources
- path: user-data.csv             # this is the full form; the path option is required
  variable-names: first,second    # delimiter-separated list of variable names, empty by default
Note: When using a CSV file, currently only the first row of data will be used when running an API Functional Test.
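For illustration, a request that consumes the variables declared above might look like this (the login endpoint is hypothetical):

scenarios:
  data-driven:
    data-sources:
    - path: user-data.csv
      variable-names: first,second
    requests:
    - https://api.example.com/login?user=${first}&password=${second}   # hypothetical endpoint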
Example of using a file in a request body
body-file: path/to/file.txt   # this file's contents will be used as the POST body
Example of using a file as a form attachment
upload-files:                    # attach files to form (and enable multipart/form-data)
- param: summaryReport           # form parameter name
  path: report.pdf               # path to file
  mime-type: application/pdf     # optional; Taurus will attempt to guess it automatically
Example of using files as pre- or post-processing scripts
jsr223:
- execute: before
  language: beanshell
  parameters: null
  script-file: username_password_create.bsh
- execute: after
  language: beanshell
  parameters: null
  script-file: write_users_to_CSV.bsh
Key-value pairs in the request body of a PUT request are ignored when running a test with JMeter as the engine. This is a JMeter bug; once JMeter releases a fix, we will be able to resolve this issue as well. You can see the status of the JMeter bug here: https://bz.apache.org/bugzilla/show_bug.cgi?id=43612. Workaround: in the body tab, toggle to "Text" and define your key-value pairs there in JSON format.
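In Taurus YAML, the equivalent workaround is to pass the body as a single JSON string instead of key-value pairs. A minimal sketch with a hypothetical endpoint:

scenarios:
  update-user:
    requests:
    - url: https://api.example.com/users/1   # hypothetical endpoint
      method: PUT
      headers:
        Content-Type: application/json
      body: '{"name": "Jane", "role": "admin"}'   # JSON string, not key-value pairs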
API Functional Test reports
Once you run the test, you will get detailed results in the API Functional Test report.
The dashboard above shows how many responses passed and failed, how many errors were identified, and how many assertions failed, if any. In this context, each request is, in fact, a 'test'.
Click the toggle to filter the results so that only failed requests are shown.
To the right you will find links to test artifacts and log files.
Under 'Test Case', you will see the script's scenarios (if run on JMeter, scenarios correlate to Thread Groups). Opening them shows the requests they contain.
Each 'test' is a label in the test plan. When you open it, you will see a summary of the label's response time, latency, request and response sizes, and the response code and message.
Let's look at the request and response of the following test.
The request data:
Looking into the request of this test, you can see the body (if you sent a body with the request), the headers and the cookies.
The response data:
Looking into the response of this test, you can see the response body and headers, and whether your assertion passed or failed (and why it failed):
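If you script your test in Taurus YAML, assertions are declared per request with the assert option. A minimal sketch, assuming a hypothetical endpoint and response:

scenarios:
  smoke-test:
    requests:
    - url: https://api.example.com/users/1   # hypothetical endpoint
      assert:
      - subject: http-code       # assert on the response status code
        contains:
        - 200
      - subject: body            # assert on the response body
        contains:
        - '"active": true'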
Report History view
In the "History" view you can see an overview of past test runs including which passed and failed and how many requests passed and failed. Click on a row or on one of the bars to the right to see the details of an individual test run.