Using Test Data With Service Virtualization

Introduction

Mock Services often reference data parameters, such as user names, properties, ids, or numeric values. You can either hard-code these values or parameterize the Mock Service with variable values. You can associate transactional Mock Services with Service Data Entities that contain test data such as CSV files or synthetically generated data parameters.

Transactional Mock Services can also share their Service Data Model with GUI Functional tests and Performance tests that have the Mock Service associated with them.

Tip: Using test data is similar to using Mock Service Configurations. Use test data if you want to reuse the same values at the service level or in Performance and Functional tests. Use Mock Service Configurations to provide common values at the Mock Service's workspace level if you don't need to share them with tests.

Note: By themselves, Mock Services are not stateful. This means that when a test POSTs data, the test data does not change; attached CSV files are not altered during the test, either.
To make Mock Services stateful, add Processing Actions.
For more information about posting or deleting stateful data in the test environment in bulk, see Test Data Orchestration.

Example Use Cases

For example, as a tester, you want to use test data in your GUI Functional or Performance tests. If these tests also rely on Mock Services, the test cases may expect the same data values in the Mock Service's responses. You wouldn’t want to hard-code all possible responses as Mock Service transactions. By parameterizing the request matcher and response, you can ensure that the data returned by the Mock Service (the Service Data Model) is consistent with the data that drives the test (the Test Data Model):

  • Use Case #1 - Stand-alone Mock Service: Support for test data is beneficial in the stand-alone case where the Mock Service is not directly related to a test, but returns responses in a look-up approach driven by CSV files. The service data can be statically provided or synthetically generated.
  • Use Case #2 - Mock Service as part of a test: When a Mock Service is associated with a test, its service data model becomes available in the test's Test Data pane.

Another use case is to simulate that a certain percentage of the service responses contain valid, invalid, or null values. You can control this percentage by using generator functions such as randFromList(), randlov(), and randFromSeedlist().
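For example, to simulate that roughly one in five responses carries an invalid value, you could define a data parameter that picks randomly from a weighted list. The exact signatures of these generator functions depend on your BlazeMeter version, so treat the following line only as an illustrative sketch in which repeating a value in randFromList() increases its share of the generated rows:

status = randFromList("valid", "valid", "valid", "valid", "invalid")

With this hypothetical definition, about 80% of the generated rows contain "valid" and about 20% contain "invalid", and the transaction's response can reference ${status}.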

For more general information on how to provide or generate test data, see How to Use Test Data.


Features

  • Parameterize the request matcher and the response with dynamic test data loaded from data entities. Data-driven matching is supported only for certain request matchers, such as "equals" (see below for the full list).
  • Define which data models to use for a particular configuration of Mock Service.
  • Define data models and data size for Mock Services, and also for Mock Service templates.
  • Preview and download test data.

This article covers the following topics:

  • How to Create Data-Driven Mock Services
  • Add Service Data to Transactions
  • Configure Data Settings
  • Add Data-Driven Mock Services to Tests
  • Preview and Download Test Data
  • Share Test Data
  • Supported Request Matchers
  • Limits

How to Create Data-Driven Mock Services

You can use data-driven transactional Mock Services either stand-alone or together with BlazeMeter tests by associating the Mock Service with Performance or GUI Functional tests.

  1. Open or create a Mock Service with transactions
  2. Add Service Data to the transactions
  3. Configure Data Settings
  4. Run the Mock Service
  5. (Optional) Add Data-Driven Mock Services to Tests

To ensure consistency between both models, either use the parameters from the Service Data Model directly in the test, or bind parameters between the Test Data Model and Service Data Model.

After you change the associated data, always redeploy the Mock Service to get the latest data. When you reconfigure a Mock Service, you can choose to regenerate the service data or keep the current data.

Add Service Data to Transactions

The Mock Services' equivalent of the Test Data pane is the Service Data pane. A Service Data Model contains one or more Data Entities. You can create several data entities, each of which can contain, for example, CSV files or other data parameters.

You will use one of the request parameters as the identifier.

To add Service Data to transactions:

  1. Go to the Service Virtualization tab.
  2. Open Asset Catalog > Transactions.
  3. Click Service Data. The Service Data pane opens on the right side.
  4. Select a service.
  5. Click the Ellipsis button and click Add Data Entity.
  6. Parameterize the transaction by defining data parameters:
    1. Click the Plus button next to the data entity and Create a New Data Parameter, or attach a CSV File.
      For more information about defining data parameters, see How to Use Test Data.
    2. Click Copy parameter name to clipboard on the parameter that you use as the identifier.
    3. Replace a hard-coded identifier value in the Request Matcher and in the Response with the pasted parameter name.
    4. (Optional) Copy and paste additional parameters as needed.
  7. Run the Mock Service.

If this is a stand-alone Mock Service, you are ready to run and use it now. If this Mock Service will be associated with a GUI Functional or Performance test, add the data-driven Mock Service to a test now.

Videos

The following video shows how to create and run a data-driven mock service in 2 minutes.

The following video shows how to drive a mock service by data loaded from a CSV file:

Example

In this example, let's assume your test data is an attached CSV file with the following content:

id,username
1,Jack
2,Jill

On the Request Matcher tab, you replace the hard-coded identifier "1234" by pasting the copied parameter ${id}.
In the Request URL, you replace

GET /accounts/1234

by

GET /accounts/${id}

On the Response tab, you replace the hard-coded identifier "1234" by pasting the copied parameter ${id}. In addition, you replace the hard-coded value "John Doe" by pasting the copied parameter ${username}.
In the Response Body, you replace

{
"accountId" : "1234",
"name" : "John Doe"
}

by

{
"accountId" : "${id}",
"name" : "${username}"
}

Note that the request and the response share an identifier, here the data parameter ${id}.

The request "GET /accounts/1" will always return "name":"Jack". The request "GET /accounts/2" will always return "name":"Jill". A request "GET /accounts/3" will not match because 3 is not in the data set; it will not return any random value nor loop over the list.
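For example, assuming the Mock Service above is deployed, the first CSV row resolves the parameterized transaction as follows:

GET /accounts/1

{
"accountId" : "1",
"name" : "Jack"
}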

Configure Data Settings

On the Service Virtualization tab, click Mock Services. Open a Mock Service and go to its Data Settings tab. Data Settings are the same as for other test types; for more information, see How to Control the Number of Rows Used - Test Data Settings.

If a CSV file is attached, BlazeMeter uses all rows by default. If only synthetic data is defined, it generates one row of synthetic data by default. You can define the number of data rows to use either as a fixed number or as the number of rows in a CSV file, just as for the other test types.

You can run several Mock Services with different Data Settings: one could use ten rows of data and another a thousand, from the same data sources.

Run Options:

  • Defined Number of Rows
  • All Rows From this CSV or F&R Data Model
  • All Rows From the longest CSV or F&R Data Model
  • All Rows From the shortest CSV or F&R Data Model

In the Service Data pane of a transaction, you can click Data Settings to view the same Run Options. At the transaction level, you can provide default Data Settings; however, the Mock Service's Data Settings take precedence.

In addition, this Data Settings pane displays your target options.

Target options:

  • Delimiter
    Indicates the separator character between columns; the default is a comma. Multi-character delimiters (such as \t) in JMX files are supported.
  • Ignore first line
    If enabled, indicates that the first line of the CSV file contains headers. If disabled, then the first line contains data.
  • Allow Quoted Data
    Indicates whether the CSV data can contain cells that themselves contain the delimiter. Such cells must be wrapped in quotation marks (see the example after this list).
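For example, the following CSV file (with a hypothetical address column) uses the default comma delimiter, starts with a header line, so Ignore first line should be enabled, and contains quoted cells that include the delimiter, so Allow Quoted Data must be enabled:

id,username,address
1,Jack,"12 Main Street, Springfield"
2,Jill,"34 Hill Road, Shelbyville"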

Add Data-Driven Mock Services to Tests

If the Mock Service is to be part of a GUI Functional or Performance test, override the parameters in the Service Data Model with values from the Test Data Model.

To add a data-driven Mock Service to a test, follow these steps:

  1. Open a Performance or GUI Functional test.
  2. Scroll down in the Test Configuration to the Mock Service Configuration section.
  3. Click the blue Plus button and add a Mock Service to the test.
  4. Open the Test Data pane and scroll down to the Service Data Model.
    The Service Data Model from the Mock Service and the Test Data Model from the test appear one after the other in the Test Data pane.
  5. For each parameter, click Copy parameter name to clipboard and replace a hard-coded value.

BlazeMeter does not automatically assume that two values with the same name in the two Data Models are references to each other. You need to bind them to each other manually.

For example, in a GUI Functional test, you have defined the parameters id and email in a Test Data Model named Users. You want to reference the same values in the Mock Service Data Model. So you edit the Service Data Model and set the id value to ${Users.id}, and the email value to ${Users.email}.
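In this scenario, assuming the Test Data Model is named Users, the edited Service Data Model parameters would read as follows, so that the Mock Service returns exactly the values the test uses:

id = ${Users.id}
email = ${Users.email}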

Preview and Download Test Data

BlazeMeter makes it easy to inspect which data is being used in context, so you don't waste time debugging your test and its test data.

  1. On the Service Virtualization tab, click Mock Services.
  2. Open a running Mock Service and go to its Data Settings tab.

Preview

The Data Settings tab displays a preview of the current test data for this Mock Service, so you see exactly which data is being used.

  • If the Mock Service is a draft, has data assigned, and the data is used in transactions, you can preview it.
  • If the Mock Service is deployed, not a draft, and has data assigned, then the preview shows the current test data used in this mock service.
  • If you made changes to the data after deployment, you'll be notified to apply the changes to preview the latest data.

Download

The Data Settings tab is also where you can download the previewed data. Click the Download button above the preview to save a copy of the current test data as a CSV file to your local machine.

If you ever need to expand this test, all you need to do is download the CSV file, add more rows or additional columns, and re-attach it to the test. Synthetic data expands automatically and generates as many rows of data as you have requested from the CSV.
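For example, to expand the CSV file from the earlier example, you could download it, add a hypothetical email column and a third row, and re-attach it:

id,username,email
1,Jack,jack@example.com
2,Jill,jill@example.com
3,Joe,joe@example.com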

Share Test Data

Data models are shared within a Workspace so you can use the same data model for GUI Functional tests, Performance tests, and Mock Services.

For more information, see How to Share Test Data.

Supported Request Matchers

Data-driven matching is supported only for the following request matchers:

  • Equals
  • Equals ignore case
  • Equals to json
    Accepts parameters only in the value, not as the key.
  • Matches json
  • Equals to xml
    Accepts parameters only in the node value, not as the node name.
  • Matches xml
  • Url equals

For query parameters, headers, and cookies, you can use data parameters only in the value fields.
You can use several parameters in a single value, such as “order created ${date} at ${time}”.
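For example, with the Equals to json request matcher, a body like the following is supported because the parameters appear only in the values; a parameterized key, such as "${field}" : "1234", would not be matched:

{
"accountId" : "${id}",
"name" : "${username}"
}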

Limits

The size of the test data that you can use with Mock Services is limited: the data file can be at most 100MB. You can estimate the size of your data file by multiplying the number of columns by the number of rows by the average size of the cell content (for example, the length of a string property).
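For example, a data model with 10 columns and 100,000 rows, where an average cell holds a string of about 60 characters, results in a data file of roughly 10 x 100,000 x 60 bytes, that is, about 60MB, which is in line with the measured sizes in the example below.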

A second limit is the time for deployment. Each part of the deploy process has a 5-minute time limit. If any step in the deployment exceeds this time limit, the whole deployment fails. The deployment has two parts:

  • The total time needed for the data generation: This part depends on the number of rows and the number of generated columns. Even if values are static, the generation of the data takes time.
  • The Mock Service startup time: The Mock Service needs to download and preprocess the generated data. The bigger the file, the longer it takes. If the Mock Service is not ready within the time limit, the whole deploy is marked as failed.

Example:

I have created a data model with 10 columns (10 properties). Eight properties are synthetically generated, and two are static strings.

I deploy the Mock Service with various numbers of data rows:

  • 50k rows; deploy time 2:25
  • 80k rows; deploy time 3:30; data file size: 47MB
  • 100k rows; deploy time 4:35; data file size: 59MB
  • 120k rows; deploy time 5:30; data file size: 71MB
  • 140k rows; deploy time 6:40; data file size: 82MB
  • 160k rows; deploy failed; data file size: 99MB

In this example, the 99MB test data file was generated within the size limits, but the last deployment failed because the deployment exceeded the time limit for the Mock Service startup.