How to Integrate Mock Services and Test Data with LoadRunner Tests

Read this practical guide to learn how to use BlazeMeter Mock Services in LoadRunner and how to generate synthetic test data on-demand for use in LoadRunner. For an overview of Mock Services in BlazeMeter, see How to Use BlazeMeter Mock Services for Testing (Blog).

How to Use BlazeMeter Mock Services in LoadRunner

Other than actually running the test from LoadRunner Pro, there are only four things you’ll need to take care of:

  1. Create the Mock Services in BlazeMeter. For more information, see Creating Your First Mock Service.

  2. In LoadRunner, in the Init Action step, spin up the Mock Services.

  3. In LoadRunner, in the End Action step, stop the Mock Services.

  4. Point the app-under-test or test script to the Mock Service.

    That’s it: you are now running your LoadRunner test while utilizing BlazeMeter Service Virtualization to mock your dependencies.

 

LoadRunner Sample Scripts

The sample scripts below start and stop a Mock Service through the BlazeMeter API. Before running them, replace {auth token} with your Base64-encoded apiKeyId:apiKeySecret pair, and replace <workspaceId> and <mockId> with the IDs of your workspace and Mock Service.

 

To start the service:

vuser_init()
{
  int i=0;
  web_add_header("Authorization", "Basic {auth token}");
  web_reg_save_param_ex(
    "ParamName=trackingUrl",
    "LB=\"trackingUrl\" : \"",
    "RB=\"",
    SEARCH_FILTERS,
    LAST);
	
  web_custom_request("start service",
    "URL=https://mock.blazemeter.com/api/v1/workspaces/<workspaceId>/service-mocks/<mockId>/deploy",
    "Method=GET",
    "RecContentType=application/json",
    "EncType=application/json",
    LAST);
	
  lr_log_message("Tracking URL is:\n%s\n", lr_eval_string("{trackingUrl}"));
 
  do{
    web_reg_save_param_ex(
      "ParamName=status",
      "LB=\"status\" : \"",
      "RB=\"",
      SEARCH_FILTERS,
      LAST);

    web_add_header("Authorization", "Basic {auth token}");

    web_custom_request("check status of service start",
      "URL=https://mock.blazemeter.com{trackingUrl}",
      "Method=GET",
      "RecContentType=application/json",
      "EncType=application/json",
      LAST);

    lr_log_message("Status is:\n%s\n", lr_eval_string("{status}"));

    sleep(1000);

    i++;  // cap polling at 60 attempts so the loop cannot hang forever
  } while (strcmp(lr_eval_string("{status}"), "FINISHED") != 0 && i < 60);

  sleep(30000);  // give the deployed Mock Service time to become reachable
  return 0;
}

 

To stop the service:

vuser_end()
{
  int i=0;
  web_add_header("Authorization","Basic {auth token}");
	
  web_reg_save_param_ex(
    "ParamName=trackingUrl",
    "LB=\"trackingUrl\" : \"",
    "RB=\"",
    SEARCH_FILTERS,
    LAST);

  web_custom_request(
    "stop service", 
    "URL=https://mock.blazemeter.com/api/v1/workspaces/<workspaceId>/service-mocks/<mockId>/stop", 
    "Method=GET",  
    "RecContentType=application/json",
    "EncType=application/json",
    LAST);

  lr_log_message("Tracking URL is:\n%s\n", lr_eval_string("{trackingUrl}"));

  do{
    web_reg_save_param_ex(
      "ParamName=status",
      "LB=\"status\" : \"",
      "RB=\"",
      SEARCH_FILTERS,
      LAST);

    web_add_header("Authorization", "Basic {auth token}");

    web_custom_request(
      "check status of service stop",
      "URL=https://mock.blazemeter.com{trackingUrl}",
      "Method=GET",
      "RecContentType=application/json",
      "EncType=application/json",
      LAST);

    lr_log_message("Status is:\n%s\n", lr_eval_string("{status}"));

    sleep(1000);

    i++;  // cap polling at 60 attempts so the loop cannot hang forever
  } while (strcmp(lr_eval_string("{status}"), "FINISHED") != 0 && i < 60);
	
  return 0;
}

Next, let’s look at the Test Data integration.

How to Generate a Data Set On-demand With Test Data for LoadRunner

  1. Create your shared data model in BlazeMeter. For more information, see How to Use Test Data and How to Share Test Data.
  2. Go to the Test Data tab and copy the data model ID in BlazeMeter.
  3. Create a scenario in the LoadRunner Controller and create two groups.
    • The first group is the "starter" script that fetches the generated data from BlazeMeter.
    • The second group is the script that will make use of the generated data. 
  4. To set up the "starter" test, add the following script in your VuGen Init Action.
    1. Replace {Your_BASE64_API-key_pair} with the Base64 encoding of your BlazeMeter API key id:secret pair.
    2. Replace {Your_Data_Model_ID} with your data model ID, and {workspaceId} with the ID of the BlazeMeter workspace it resides in.
      long File;
      // Set this to a local path to store the generated data file.
      char FileLocation[1024] = "C:\\folder\\your_folder\\Scripts\\Your_Scenario's_Name\\data.csv"; 
      
      vuser_init()
      {
        //set the Basic Authorization header with your base64 encoded apiKeyId:apiKeySecret
        web_add_header("Authorization", "Basic {Your_BASE64_API-key_pair}");
        web_add_header("Content-Disposition", "attachment");
        web_add_header("Accept", "text/csv");
        web_reg_save_param_ex(
          "ParamName=bodyparam",
          "LB=",
          "RB=",
          SEARCH_FILTERS,
          "Scope=Body",
          LAST);
      
      
        //Define your test data endpoint by setting the workspace and the data model ID.
        web_custom_request("generate data", 
      	"URL=https://tdm.blazemeter.com/api/v1/workspaces/{workspaceId}/datamodels/{Your_Data_Model_ID}/generatefile?entity=default", 
      	"Method=POST",  
      	"RecContentType=application/json",
      	"EncType=application/json", 
      	LAST);
      	
        lr_log_message("CSV response is:\n%s\n", lr_eval_string("{bodyparam}"));
      	
        lr_log_message("\n\nWriting CSV data to file\n\n");
        File = fopen (FileLocation,"w+");
        fprintf (File, "%s", lr_eval_string("{bodyparam}")); 
        fclose (File);
        return 0;
      }
      
  5. Run the Init and view the data set values in the log viewer in VuGen.

  6. Parameterize your second script (your actual test) with variables that you pull from the CSV file. The "starter" step of the Init Action downloads the CSV file from BlazeMeter.

  7. Set the scenario in the LoadRunner Controller to schedule the test by groups (one group per script), and set the second group to run when the first group finishes.
    This way the CSV data is updated before the second script starts.

You’ve successfully set up a test in LoadRunner while utilizing the BlazeMeter synthetic data generator to incorporate dynamic test data in your performance test.