BlazeMeter Mock Services in LoadRunner - A Practical Guide
Other than actually running the test from LoadRunner Professional, there are only three things you'll need to take care of:
1. Create the Mock Services in BlazeMeter. Here’s a detailed guide to follow.
2. In LoadRunner, spin up the mock service in the 'Init Action' step and stop it in the 'End Action' step once the test is complete. In the scripts below, replace <workspaceId> with the ID of the workspace your Mock Service resides in, replace <mockId> with the Mock Service's ID, and replace {auth token} with a Base64 encoding of your BlazeMeter API key id:secret pair (a hedged sketch for producing this Base64 value follows the stop-service script below).
Start the service:
vuser_init()
{
    int i = 0;

    web_add_header("Authorization", "Basic {auth token}");

    web_reg_save_param_ex(
        "ParamName=trackingUrl",
        "LB=\"trackingUrl\" : \"",
        "RB=\"",
        SEARCH_FILTERS,
        LAST);

    web_custom_request("start service",
        "URL=https://mock.blazemeter.com/api/v1/workspaces/<workspaceId>/service-mocks/<mockId>/deploy",
        "Method=GET",
        "RecContentType=application/json",
        "EncType=application/json",
        LAST);

    lr_log_message("Tracking URL is:\n%s\n", lr_eval_string("{trackingUrl}"));

    do {
        web_reg_save_param_ex(
            "ParamName=status",
            "LB=\"status\" : \"",
            "RB=\"",
            SEARCH_FILTERS,
            LAST);

        web_add_header("Authorization", "Basic {auth token}");

        web_custom_request("check status of service start",
            "URL=https://mock.blazemeter.com{trackingUrl}",
            "Method=GET",
            "RecContentType=application/json",
            "EncType=application/json",
            LAST);

        lr_log_message("Status is:\n%s\n", lr_eval_string("{status}"));

        sleep(1000);
        i++;
    } while (strcmp(lr_eval_string("{status}"), "FINISHED") != 0);

    sleep(30000);

    return 0;
}
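Note that the polling loop above increments a counter i but never checks it, so if the deployment never reports FINISHED the Init step will spin forever. Below is a minimal, hedged variation of the same do/while loop that drops into vuser_init in its place and uses i as a retry cap; the 60-poll limit and the use of lr_abort() on timeout are assumptions, not part of the BlazeMeter API:

    // Hedged sketch: bound the status polling so a stuck deployment
    // does not hang the Vuser forever. The 60-attempt cap (~1 minute)
    // is an arbitrary assumption; adjust it to your environment.
    do {
        web_reg_save_param_ex(
            "ParamName=status",
            "LB=\"status\" : \"",
            "RB=\"",
            SEARCH_FILTERS,
            LAST);

        web_add_header("Authorization", "Basic {auth token}");

        web_custom_request("check status of service start",
            "URL=https://mock.blazemeter.com{trackingUrl}",
            "Method=GET",
            "RecContentType=application/json",
            "EncType=application/json",
            LAST);

        lr_log_message("Status is:\n%s\n", lr_eval_string("{status}"));

        sleep(1000);
        i++;
    } while (strcmp(lr_eval_string("{status}"), "FINISHED") != 0 && i < 60);

    if (i >= 60) {
        lr_error_message("Mock service did not report FINISHED after %d polls, aborting.", i);
        lr_abort();
    }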
Stop the service:
vuser_end()
{
    int i = 0;

    web_add_header("Authorization", "Basic {auth token}");

    web_reg_save_param_ex(
        "ParamName=trackingUrl",
        "LB=\"trackingUrl\" : \"",
        "RB=\"",
        SEARCH_FILTERS,
        LAST);

    web_custom_request("stop service",
        "URL=https://mock.blazemeter.com/api/v1/workspaces/<workspaceId>/service-mocks/<mockId>/stop",
        "Method=GET",
        "RecContentType=application/json",
        "EncType=application/json",
        LAST);

    lr_log_message("Tracking URL is:\n%s\n", lr_eval_string("{trackingUrl}"));

    do {
        web_reg_save_param_ex(
            "ParamName=status",
            "LB=\"status\" : \"",
            "RB=\"",
            SEARCH_FILTERS,
            LAST);

        web_add_header("Authorization", "Basic {auth token}");

        web_custom_request("check status of service stop",
            "URL=https://mock.blazemeter.com{trackingUrl}",
            "Method=GET",
            "RecContentType=application/json",
            "EncType=application/json",
            LAST);

        lr_log_message("Status is:\n%s\n", lr_eval_string("{status}"));

        sleep(1000);
        i++;
    } while (strcmp(lr_eval_string("{status}"), "FINISHED") != 0);

    return 0;
}
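About the {auth token} placeholder: it is simply the Base64 encoding of your apiKeyId:apiKeySecret pair, which you can generate once with any Base64 tool and paste into the scripts. If you prefer to build it inside the script, the sketch below is one way to do it in plain C within the VuGen script (where strlen and sprintf are already available). The b64_encode helper and the placeholder credentials are illustrative assumptions, not part of LoadRunner or the BlazeMeter API:

// Hedged sketch: build the "Basic <base64(id:secret)>" header value in the script
// instead of pasting a pre-encoded token. b64_encode is a hypothetical helper.
void b64_encode(const char *in, char *out)
{
    const char *tbl = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    int i, len = strlen(in);

    for (i = 0; i < len; i += 3) {
        int b = (in[i] & 0xFF) << 16;
        if (i + 1 < len) b |= (in[i + 1] & 0xFF) << 8;
        if (i + 2 < len) b |= (in[i + 2] & 0xFF);

        *out++ = tbl[(b >> 18) & 0x3F];
        *out++ = tbl[(b >> 12) & 0x3F];
        *out++ = (i + 1 < len) ? tbl[(b >> 6) & 0x3F] : '=';
        *out++ = (i + 2 < len) ? tbl[b & 0x3F] : '=';
    }
    *out = '\0';
}

// Usage inside vuser_init(), before the web_add_header("Authorization", ...) call:
//   char encoded[256], header[300];
//   b64_encode("yourApiKeyId:yourApiKeySecret", encoded);  // placeholder credentials
//   sprintf(header, "Basic %s", encoded);
//   web_add_header("Authorization", header);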
3. Point the app-under-test or test script to the mock service.
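For instance, if your load script normally calls the real dependency directly, you would redirect those requests to the mock's endpoint instead. A minimal, hedged sketch of such an Action step is shown below; the host and path are placeholders and should be taken from your own Mock Service's configuration in BlazeMeter:

Action()
{
    // Hedged sketch: call the dependency through the BlazeMeter mock instead of
    // the real service. The host and path below are placeholders; use the
    // endpoint shown for your Mock Service in BlazeMeter.
    web_custom_request("call mocked dependency",
        "URL=https://<your-mock-service-host>/<mocked-endpoint>",
        "Method=GET",
        "RecContentType=application/json",
        LAST);

    return 0;
}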
That's it. You've successfully run your LoadRunner test while using BlazeMeter Mock Services to stand in for your dependencies. Now let's look at the Test Data integration.
How to Generate a Data Set On-demand With Test Data for LoadRunner
1. Create your data model in BlazeMeter according to the steps in the How to Share Test Data article.
2. Create a scenario in the LoadRunner Controller with two groups: a "starter" script that fetches the data from BlazeMeter, and a second script that makes use of that data.
3. Set up the "starter" test. In your VuGen Init Action, add the script below, making the following replacements:
- Replace {Your_BASE64_API-key_pair} with a Base64 encoding of your BlazeMeter API key id:secret pair.
- Replace {Your_Data_Model_ID} with your own data model's ID, and {workspaceId} with the ID of the BlazeMeter workspace it resides in.
- How do you find your Data Model ID? In BlazeMeter, go to "Functional" (on the top navigation bar) -> "Create Test" -> "GUI Functional Test" -> "Test Data". Open your browser's devtools on the Network tab, clear the recorded requests, then click the ellipsis menu and select "Manage". The response of the request named "datamodels?view_as=..." includes the Data Model ID you're looking for.
long File;
char FileLocation[1024] = "C:\\folder\\your_folder\\Scripts\\Your_Scenario's_Name\\data.csv"; // Set to your local data file.

vuser_init()
{
    // Set the Basic Authorization header with your Base64-encoded apiKeyId:apiKeySecret pair.
    web_add_header("Authorization", "{Your_BASE64_API-key_pair}");
    web_add_header("Content-Disposition", "attachment");
    web_add_header("Accept", "text/csv");

    web_reg_save_param_ex(
        "ParamName=bodyparam",
        "LB=",
        "RB=",
        SEARCH_FILTERS,
        "Scope=Body",
        LAST);

    // Set your Test Data endpoint by changing the workspace and the data model ID.
    web_custom_request("generate data",
        "URL=https://tdm.blazemeter.com/api/v1/workspaces/{workspaceId}/datamodels/{Your_Data_Model_ID}/generatefile?entity=default",
        "Method=POST",
        "RecContentType=application/json",
        "EncType=application/json",
        LAST);

    lr_log_message("JSON response is:\n%s\n", lr_eval_string("{bodyparam}"));
    lr_log_message("\n\nWriting JSON to CSV\n\n");

    File = fopen(FileLocation, "w+");
    fprintf(File, "%s", lr_eval_string("{bodyparam}"));
    fclose(File);

    return 0;
}
Running the Init action will print the generated data set values in VuGen's log viewer.
4. Configure your second script, i.e. your actual test, with parameters that pull values from the CSV file the "starter" script overwrites (see the sketch after this list).
5. In the LoadRunner Controller, schedule the scenario by groups (one group per script) and set the second group to start when the first group finishes. This way the data is refreshed before the second script runs.
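As a sketch of step 4, the second group's Action could look like the example below. It assumes a File-type parameter named username, defined in VuGen's Parameter List and mapped to a column of the same data.csv that the starter script overwrites; the parameter name and target URL are placeholders, not part of the integration itself:

Action()
{
    // Hedged sketch: {username} is a File-type parameter mapped to a column of
    // data.csv in the Parameter List. The starter group refreshes that file
    // before this group starts, so each iteration picks up freshly generated data.
    lr_log_message("Using generated value: %s", lr_eval_string("{username}"));

    web_custom_request("request with generated data",
        "URL=https://<your-application-host>/login?user={username}",
        "Method=GET",
        "RecContentType=application/json",
        LAST);

    return 0;
}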
You've successfully set up a test in LoadRunner that uses BlazeMeter's synthetic data generator to incorporate dynamic test data into your performance test.
Please contact us if you have any questions on this integration. We'll be happy to help.
integrations@blazemeter.com