You may find that a JMeter test on BlazeMeter executes successfully with a low number of threads (users), but fails or returns unexpected errors once you scale up to a higher load. This is often a sign that your test needs calibration, which ensures it will run reliably at higher loads.
With the exception of small, low-scale tests, all tests should be properly calibrated to avoid overloading the engine's CPU or memory.
This article walks you through the test calibration process step by step.
Using a Taurus YAML with your test? If so, then please follow our Calibrating a Taurus Test guide instead.
Step 1: Create Your Script
There are two ways to create your script:
- Use the BlazeMeter Proxy Recorder or BlazeMeter Chrome Extension to record your script.
- Build everything manually from scratch. This is more common for functional/QA tests.
If you generate a JMeter script from a recording, keep in mind that:
- You'll need to change certain parameters, such as Username & Password. You can also set a CSV file with those values so each user can be unique (see the sample file after this list).
- You might need to extract elements such as Token-String, Form-Build-Id, and others using a Regular Expression Extractor, the JSON Path Extractor, or the XPath Extractor. This will enable you to complete requests like "AddToCart", "Login", and more.
- You should keep your script parameterized and use configuration elements like HTTP Request Defaults to make your life easier when switching between environments.
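For example, a CSV Data Set Config pointed at a file like the one below gives each thread its own credentials. (The file name users.csv, the column names, and the values here are placeholders for illustration.)

```
username,password
user01,Passw0rd!01
user02,Passw0rd!02
user03,Passw0rd!03
```

If the "Variable Names" field of the CSV Data Set Config is left empty, JMeter reads the header row as the variable names, and your samplers can then reference ${username} and ${password} wherever the recording hard-coded values.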
Step 2: Test Locally with JMeter
Start debugging your script with one thread and one iteration, using the View Results Tree element, the Debug Sampler, and the Dummy Sampler. Keep the Log Viewer open in case any JMeter errors are reported.
Go over the True and False responses of all the scenarios to make sure the script is performing as you expected.
After the script has run successfully using one thread, raise it to 10-20 threads for ten minutes (the non-GUI run shown after the checklist below works well for this) and check:
- Are the users coming up as unique (if this was your intention)?
- Are you getting any errors?
- If you're running a registration process, take a look at your backend. Are the accounts created according to your template? Are they unique?
- Check test statistics under "Cumulative Stats". Do they make sense (in terms of average times, hits)?
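For the multi-thread pass, it's best to run JMeter in non-GUI mode so the GUI itself doesn't skew resource usage. A typical invocation (the file names here are placeholders) looks like this:

```
# -n: non-GUI mode   -t: test plan   -l: results file   -j: JMeter run log
jmeter -n -t my_script.jmx -l results.jtl -j jmeter.log
```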
Once your script is ready:
- Clean it up by removing any Debug/Dummy Samplers and deleting your script listeners.
- If you use Listeners (such as "Save Responses to a file") or a CSV Data Set Config, make sure you don't use any paths; use only the filename, as if the file were in the same folder as your script (see the JMX fragment after this list).
- If you're using your own proprietary JAR file(s), upload them.
- If your script uses more than one thread group, be aware of how BlazeMeter divides users among multiple thread groups, as detailed in our explanation of Total Users.
- If your script uses special thread groups, be aware of how BlazeMeter handles them; our guide on how Ultimate Thread Groups are handled provides the details.
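As an illustration of the path rule above, here is roughly what the relevant part of a CSV Data Set Config looks like inside the JMX file (trimmed to the relevant properties):

```
<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="CSV Data Set Config">
  <!-- Bare filename only: the uploaded file sits next to the script.
       A local path such as C:\data\users.csv would break the cloud run. -->
  <stringProp name="filename">users.csv</stringProp>
  <!-- Empty variableNames: JMeter reads the column names from the header row -->
  <stringProp name="variableNames"></stringProp>
  <stringProp name="delimiter">,</stringProp>
</CSVDataSet>
```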
Step 3: Run a Debug Test
Start with a Debug Test, which makes a logical copy of your test and runs it at a lower scale: 10 threads for a maximum of 5 minutes or 100 iterations, whichever occurs first.
This Debug configuration allows you to test your script and backend and ensure everything works well.
Here are some common issues you might come across:
- Your firewall may block BlazeMeter's engines from reaching your application server. Review our guide on Load Testing Behind Your Corporate Firewall for some solutions.
- Make sure all of your test files (CSVs, JARs, JSON, user.properties, etc.) are uploaded to the test. (Refer to our guide on Uploading Files & Shared Folders for more details.)
- Make sure you didn't use any paths with your filenames.
WARNING! If you do not upload all of the files your test script uses, or if you do not remove local paths from your file references, the test may fail to start and hang indefinitely while BlazeMeter searches for a file location that doesn't exist.
If you're still having trouble, check the Errors Report and Logs Report; they usually provide enough data to analyze the results and confirm the script executed as you expected.
You should also check the Engine Health Report to see how much memory and CPU were used, which is key to the next step.
Lastly, keep an eye out for common issues that may cause your test to run locally but not on BlazeMeter, or to not start at all.
Step 4: Determine Users per Engine
Now that we're sure the script runs flawlessly on BlazeMeter, we need to figure out how many users one engine can sustain.
Set your test configuration as follows (a convenient way to rerun these values at different scales is sketched after this list):
- Number of threads: 500
- Ramp-up: 40 minutes
- Duration: 50 minutes
- Use 1 engine.
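Since you'll be rerunning this setup at several thread counts, one convenient approach (our suggestion, not a BlazeMeter requirement) is to drive the Thread Group fields from JMeter properties using the built-in __P function; the property names below are arbitrary:

```
# Thread Group fields in the JMX (40 min = 2,400 s; 50 min = 3,000 s):
#   Number of Threads:  ${__P(threads,500)}
#   Ramp-up Period:     ${__P(rampup,2400)}
#   Duration:           ${__P(duration,3000)}
# A rerun at a different scale then needs no JMX edits, e.g.:
jmeter -n -t my_script.jmx -Jthreads=700 -l results.jtl
```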
Run the test and monitor your test's engine via the Engine Health Report.
If your engine didn't reach either 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Change the number of threads to 700 and run the test again.
- Raise the number of threads until you get to 1,000 threads or 60% CPU.
If your engine passed 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Look at when your test first reached 75% and note how many users you had at that point.
- Run the test again. This time, decrease the threads per engine by 10%.
- Set the ramp-up time you want for the real test (5-15 minutes is a great start) and set the duration to 50 minutes.
- Make sure you don't go over 75% CPU or 85% memory usage throughout the test.
Note: The user limit of your subscription plan is not the same as the number of users one engine can handle, which varies from one test script to the next. For example, if your plan allows for 1,000 users, that does not guarantee that all 1,000 users can execute on one engine.
Step 5: Configure Your Full Load Test
Once we know the script is working and how many users each engine can sustain, we can finally configure the test to achieve our load testing goal.
Let’s assume these values (as an example):
- One engine can sustain 500 users
- We aim to test for 10K users
This means that to achieve our goal, our test needs 20 engines (10,000 / 500). Those 20 engines can either be all in one geographic location, or spread across multiple locations. (Refer to our Load Distribution guide for more details.)
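In general, the engine count is the target load divided by the calibrated per-engine capacity, rounded up to the next whole engine:

```
engines = ceil(target_users / users_per_engine)
        = ceil(10,000 / 500)
        = 20
```

The rounding matters whenever the division isn't exact: a 10,300-user goal with the same 500-user engines would need 21 engines, not 20.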
Note: BlazeMeter uses a variety of cloud providers (AWS, Google Cloud, and Azure) with different machine types and network infrastructures, so if your test runs engines from more than one cloud provider, it's recommended to verify that each provider's engines can sustain the load and produce the desired outcome.
Step 6: Use a Multi-Test for Multiple Scenarios
This step is optional and only applies if your test includes multiple scenarios, in which case you should set up your test as a Multi-Test. (For a detailed walkthrough, refer to our guide on Multi-Tests.)
- Create a new Multi-Test.
- Add each scenario (single test).
- You can change the configuration of each scenario (as detailed in the Modify the Scenarios section of our Multi-Test guide).
- Click "Run Test" to launch all of your scenarios (additional options are covered in the Run the Multi-Test section of our Multi-Test guide).
- The aggregated report of your Multi-Test will start generating results within a few minutes; results can also be filtered for each individual scenario. (For details on how to use these filters, refer to our Reporting Selectors for Scenario and Location guide.)
That's it! Your test is up and running!