Creating and running load tests does not come naturally. These tasks require testing knowledge and a deep understanding of the target application.
The ideal scenario is to create a script, validate it, upload it to BlazeMeter, run it, quickly generate beautiful reports, and then analyze them. However, there are many variables along the way, and while the process seems simple, it usually takes patience and some debugging to get right.
Here are the main steps, from creating a test, through debugging it in BlazeMeter, to getting the results you want.
- Creating a test
- Running a Debug test in BlazeMeter
- Calibrating the BlazeMeter test
- Debugging the test according to results
- Running a full test session
Creating a test
For more information, see Creating a Performance Test.
Running a Debug test in BlazeMeter
The Debug test configuration allows you to try your script in BlazeMeter's cloud and make sure everything works well before you scale up.
Here are some common issues you might come across:
- Firewall - make sure your environment is open to BlazeMeter's load generators by whitelisting the IP ranges in the BlazeMeter CIDR list.
- Make sure all of your test files (for example, CSV, JAR, and user.properties files) are present.
- Make sure you didn't use any local paths to the files, such as c:\folder\file.csv; reference files by their relative names instead (see the sketch below this list).
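A quick way to catch path problems before a cloud run is to fail fast whenever a required file is missing. Below is a minimal sketch of a JMeter JSR223 sampler (Groovy engine, which accepts this Java-style code) that performs such a check; the file name users.csv is a hypothetical placeholder for your own test file.

```java
// Minimal JSR223 sampler sketch: fail the sampler if a required data file
// is missing on the load generator. "users.csv" is a hypothetical file name;
// replace it with your own test file.
File dataFile = new File("users.csv"); // relative name, as uploaded with the test
if (!dataFile.exists()) {
    log.error("Missing test file: " + dataFile.getAbsolutePath());
    SampleResult.setSuccessful(false);
    SampleResult.setResponseMessage("Missing test file: " + dataFile.getName());
}
```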
For more information on Debug tests, see Debug Test: Low-Scale Test Run and Enhanced Logging.
Calibrating the BlazeMeter test
Now that we're sure the script runs flawlessly in BlazeMeter, we need to figure out how many virtual users one load generator can sustain.
Set your test configuration to:
- Number of threads: 500
- Ramp-up: 2400 seconds
- Iterations: forever
- Duration: 50 minutes
- Use 1 load generator (engine).
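With these settings, JMeter starts roughly one new virtual user every 4.8 seconds (500 threads spread over a 2,400-second ramp-up), and because the 50-minute duration equals 3,000 seconds, all 500 users run concurrently for the final 10 minutes.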
Run the test and monitor your test's engine via the Monitoring Report.
If your engine didn't reach either 75% CPU utilization or 85% memory usage (one-time peaks can be ignored), then:
- Change the number of threads to 700 and run the test again.
- Keep raising the number of threads until you reach 1,000 threads or 60% CPU utilization.
If your engine exceeded 75% CPU utilization or 85% memory usage (one-time peaks can be ignored), then:
- Find when your test first reached 75% CPU, and see how many users you had at that point.
- Run the test again. This time, enter the number of users you got from the previous test.
- Set the ramp-up time you want for the real test (5-15 minutes is a great start) and set the duration to 50 minutes.
- Make sure you don't go over 75% CPU or 85% memory usage throughout the test.
You can also take the safer route and reduce the number of threads per load generator by 10%.
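As a minimal sketch of this arithmetic, the snippet below estimates how many engines a target load requires, applying the optional 10% safety margin. The values of targetUsers and usersPerEngine are hypothetical; replace them with your own load goal and calibration result.

```java
// Minimal sketch of the engine-count arithmetic described above.
public class EngineEstimate {
    public static void main(String[] args) {
        int targetUsers = 5000;    // hypothetical total load you want to reach
        int usersPerEngine = 700;  // hypothetical calibration result for one engine
        // Optional 10% safety margin per load generator, as suggested above.
        int safeUsersPerEngine = (int) (usersPerEngine * 0.9);
        int engines = (int) Math.ceil((double) targetUsers / safeUsersPerEngine);
        System.out.println("Users per engine (with margin): " + safeUsersPerEngine);
        System.out.println("Engines needed: " + engines); // 5000 / 630 -> 8 engines
    }
}
```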
For a more detailed guide on test calibration, see Calibrating a JMeter Test.
Debugging the test according to results
Reports are generated in real time from the first second of a test session.
In the report, you will find almost all the information you need to debug your test run.
- Errors report - provides a list of all errors received during the test, giving you a very good sense of what went wrong. For example, if you see many 403 Forbidden errors, chances are that your firewall is blocking traffic from our load generators, and you will have to whitelist the BlazeMeter IP ranges.
- Engine Health Report - displays the performance indicators collected from the load generators during the test. High CPU or memory values indicate that the scenario is overwhelming the load generators and that adjustments have to be made.
- Logs - these mostly refer to the console log and the load generators' (engines') logs. You may notice errors printed there, such as "CSV file not found", Java errors, Beanshell errors, and so on. The logs also provide more detail on the health of the load generators, so suspicions raised by the Monitoring Report can be validated.
- Aggregate report - enables you to view the statistics by label. You may notice that some labels generate many errors or have higher response times than the rest.
- Drill down to the data level:
- Drill down to the report's JTL file. A JTL file contains all the results of a session; the most convenient way to examine it is to open it in JMeter's View Results Tree listener.
- In some cases, you will want data from the actual responses that is not included in the JTL. The best practice is to add a JMeter listener called Simple Data Writer to the script. It writes additional details, such as response data, response headers, and idle time, to a separate log file (see the sketch after this list).
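If you prefer to capture response details only for failures rather than for every sample, a JSR223 PostProcessor can complement the Simple Data Writer. The sketch below (Groovy engine, Java-style syntax) logs the response details of failed samplers; prev is the standard JSR223 binding for the previous sample's result.

```java
// Minimal JSR223 PostProcessor sketch: log response details for failed samplers.
// This complements (not replaces) the Simple Data Writer described above.
if (!prev.isSuccessful()) {
    log.info("Label: " + prev.getSampleLabel());
    log.info("Response code: " + prev.getResponseCode());
    log.info("Response headers: " + prev.getResponseHeaders());
    log.info("Response data: " + prev.getResponseDataAsString());
}
```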
Running a full test session
After debugging the test and cleaning up all the errors and stress points, you should be ready to run the maximum load for its full duration.
At this point, your scenario should be well optimized, so you can run a load test with confidence. Large-scale tests and lengthy soak tests will run just as well.
Want to learn more? Watch our on-demand recording, How to Make JMeter Highly Scalable and More Collaborative With BlazeMeter.