Verifying your script runs properly in BlazeMeter

Creating and running load tests are not tasks that come naturally. They require testing knowledge and a solid understanding of the target application.

The ideal scenario is to create a script, validate it, upload it to BlazeMeter, run it, and then analyze the reports it generates. However, there are many variables along the way, and while the process seems simple, it usually requires patience and some debugging to get right.

Here are the main steps, from creating a test to debugging it in BlazeMeter and finally getting the results you need.

Creating a test

Please refer to this article. 

Running a Sandbox test in BlazeMeter

The Sandbox configuration allows you to test your script in BlazeMeter's cloud and ensure everything works well.

Here are some common issues you might come across:

  1. Firewall - make sure your environment accepts traffic from BlazeMeter's load generators by whitelisting the IP ranges in the BlazeMeter CIDR list.
  2. Make sure all of your test files, e.g., CSVs, JARs, etc., are present.
  3. Make sure you reference files by name only, without paths, e.g., use file.csv instead of c:\files\file.csv
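To catch issue #3 before uploading, you can scan your JMX script for absolute file paths. This is an illustrative helper, not a BlazeMeter tool; it assumes the common JMeter convention of storing file references in `stringProp` elements whose `name` attribute contains "filename" (as in CSV Data Set Config).

```python
import re
import xml.etree.ElementTree as ET

def find_absolute_paths(jmx_path):
    """Return filename values in a JMX that look like absolute paths."""
    tree = ET.parse(jmx_path)
    offenders = []
    # JMeter stores file references in <stringProp> elements whose name
    # attribute usually contains "filename" (CSV Data Set Config, etc.)
    for prop in tree.iter("stringProp"):
        if "filename" not in prop.get("name", "").lower():
            continue
        value = (prop.text or "").strip()
        # Windows drive letters (c:\...) or Unix absolute paths (/home/...)
        if re.match(r"^[A-Za-z]:[\\/]", value) or value.startswith("/"):
            offenders.append(value)
    return offenders
```

Any path this returns should be replaced with a bare filename, and the file itself uploaded alongside the script.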

For more info on Sandbox tests, please refer to this article.

Calibrating the BlazeMeter test

Now that we're sure the script runs flawlessly in BlazeMeter, we need to figure out how many users we can load using one load generator.

Set your test configuration to:

  • Number of threads: 500
  • Ramp-up: 2400 seconds
  • Iterations: forever
  • Duration: 50 minutes
  • Use 1 load generator (engine).
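With these numbers, the 500 threads start gradually over the first 40 minutes (2,400 seconds) of the 50-minute run, leaving the last 10 minutes at full load. The arithmetic, as a quick sketch:

```python
threads = 500
ramp_up_s = 2400          # 40-minute ramp-up
duration_s = 50 * 60      # 50-minute total duration

seconds_per_thread = ramp_up_s / threads   # one new thread every 4.8 s
full_load_s = duration_s - ramp_up_s       # 600 s (10 min) at full load
```

The slow ramp-up is deliberate: it lets you see exactly how many users are active when the engine first approaches its limits.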

Run the test and monitor your test's engine via the Monitoring Report.

If your engine didn't reach 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):

  • Change the number of threads to 700 and run the test again
  • Keep raising the number of threads until you reach 1,000 threads or 60% CPU

If your engine passed 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):

  • Find the point at which the test first reached 75% CPU, and note how many users were active at that moment.
  • Run the test again, this time with the number of users you got from the previous test.
  • Set the ramp-up time you want for the real test (5-15 minutes is a good start) and set the duration to 50 minutes.
  • Make sure the engine doesn't go over 75% CPU or 85% memory usage throughout the test

To be on the safe side, you can also decrease the number of threads per load generator by 10%.
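The calibration loop above can be sketched in code. The thresholds and the "two consecutive samples" rule for ignoring one-time peaks are illustrative assumptions; in practice you read these values off the Monitoring Report rather than through any API like this.

```python
def engine_overloaded(cpu_samples, mem_samples,
                      cpu_limit=75.0, mem_limit=85.0):
    """Return True if the engine is sustained above a limit.

    A single sample over a limit (a one-time peak) is ignored;
    two or more consecutive samples over it count as sustained load.
    """
    def sustained(samples, limit):
        streak = 0
        for value in samples:
            streak = streak + 1 if value > limit else 0
            if streak >= 2:
                return True
        return False
    return sustained(cpu_samples, cpu_limit) or sustained(mem_samples, mem_limit)

def next_thread_count(current_threads, overloaded, users_at_limit=None):
    """Suggest the next calibration step for a single load generator."""
    if overloaded:
        # Rerun with the user count observed when the limit was first hit,
        # shaving 10% off as the safety margin mentioned above.
        return int((users_at_limit or current_threads) * 0.9)
    # Headroom left: raise the thread count (500 -> 700 -> ... up to 1,000).
    return min(current_threads + 200, 1000)
```

For example, an engine that shows a single CPU spike to 90% is not considered overloaded, while two consecutive samples above 75% are.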

Looking for a more detailed guide to test calibration? Click here.

Debugging the test according to results

Reports are generated in real time from the first second of a test session.
In the report, you will find almost all the information you need to debug your test run.

  • Errors report - lists all the errors received during the test and gives you a good sense of what went wrong. For example, if you get many 403 Forbidden errors, chances are your firewall is blocking traffic from our load generators, and you will have to whitelist our IP ranges according to this list.
  • Monitoring Report - displays the performance indicators gathered from the load generators during the test. High CPU or memory values indicate that the scenario is overwhelming the load generators and adjustments have to be made.
  • Logs - mostly the console log and the load generator (engine) logs. Errors printed there, e.g., CSV file not found, Java errors, Beanshell errors, etc., can point to the problem. The logs also provide more detail on the health of the load generators, so suspicions raised by the Monitoring Report can be validated.
  • Aggregate report - lets you view the statistics per label. You may notice that some labels generate many errors or have higher response times than the rest.
  • Drill down to the data level:
  1. Drill down to the report's JTL file. A JTL file contains all the results of a session. The most convenient way to examine it is to open it in JMeter's 'View Results Tree' listener.
  2. In some cases you will need data from the actual responses that is not included in the JTL. The best practice is to add a JMeter listener called Simple Data Writer to the script. It lets you write details such as response data, response headers, idle time, etc. to an additional log file.
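As a sketch of that drill-down, a CSV-format JTL (JMeter's default results format) can also be summarized per label with the standard library. The column names (`label`, `elapsed`, `success`) match JMeter's default CSV header; adjust them if your jmeter.properties changes the saved fields.

```python
import csv
from collections import defaultdict

def summarize_jtl(path):
    """Aggregate error count and average elapsed time per label from a CSV JTL."""
    stats = defaultdict(lambda: {"samples": 0, "errors": 0, "elapsed_sum": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row["label"]]
            s["samples"] += 1
            s["elapsed_sum"] += int(row["elapsed"])   # response time in ms
            if row["success"].lower() != "true":
                s["errors"] += 1
    return {
        label: {
            "samples": s["samples"],
            "errors": s["errors"],
            "avg_elapsed_ms": s["elapsed_sum"] / s["samples"],
        }
        for label, s in stats.items()
    }
```

A summary like this quickly shows which labels concentrate the errors or the slow responses, which is the same question the Aggregate report answers in the UI.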

Running a full test session

After debugging the test and cleaning up all the errors and stress points, you should be ready to run the maximum load for the full duration.
At this point, your scenario should be well optimized, so you can run a load test with confidence. Large-scale tests or lengthy Soak tests will run just as well.

Want to learn more? Watch our on-demand recording, How to Make JMeter Highly Scalable and More Collaborative With BlazeMeter.
