NOTE: This guide only applies to those still using our legacy UI. Please refer to our new Calibrating a JMeter Test guide which accounts for the changes in our new testing infrastructure. It's considerably easier now!
You may find that when you run a test with a low number of threads (users), the test will execute successfully, but once you begin scaling up to a higher load, the test may fail or return unexpected errors. This is often a sign that your test is in need of calibration, which must be performed to ensure a test will reliably run at higher loads.
This article will show you the steps you should take to properly run the test calibration process.
Using a Taurus YAML with your test? If so, then please follow our Calibrating a Taurus Test guide instead.
Quick Overview of the Steps
- Write your script
- Test it locally with JMeter
- Run a BlazeMeter Sandbox test
- Set up the number of users per engine using 1 console & 1 engine
- (Legacy tests only) Set up and test your Cluster (1 console & 10-14 engines)
- Use the Multi Test feature to reach your max CC goal
Step 1: Write Your Script
There are various ways to get your script:
- Use the BlazeMeter proxy recorder.
- Use the JMeter HTTP(S) Test Script Recorder. This sets up a proxy you can route your traffic through to record everything.
- Build everything manually from scratch. This is more common for functional/QA tests.
If you get your script from a recording (as in the first two options above), keep in mind that:
- You'll need to change certain parameters, such as Username & Password, or you might want to set a CSV file with those values so each user can be unique.
- You might need to extract elements such as Token-String, Form-Build-Id and others using the Regular Expression, JSON Path, or XPath extractors. This will enable you to complete requests like "AddToCart", "Login" and more. See this article regarding these procedures.
- You should keep your script parameterized and use configuration elements like HTTP Requests Defaults to make your life easier when switching between environments.
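One common way to parameterize credentials is to feed a CSV file to a CSV Data Set Config so each thread gets unique values. Here's a minimal sketch that generates such a file; the filename `users.csv`, the column names, and the credential pattern are all illustrative assumptions, not values BlazeMeter requires:

```python
import csv

# Hypothetical example: generate a users.csv that a JMeter CSV Data Set
# Config could read, so each thread logs in with unique credentials.
# Filename, columns, and value patterns are illustrative only.
rows = [(f"loadtest_user_{i}", f"Passw0rd_{i}") for i in range(1, 21)]

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "password"])  # header row
    writer.writerows(rows)
```

In the CSV Data Set Config you would then reference only the filename (no local path), matching the cleanup advice later in this guide.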
Step 2: Testing Locally With JMeter
Start debugging your script with one thread, one iteration, and using the View Results Tree element, Debug Sampler, and Dummy Sampler. Keep the Log Viewer open in case any JMeter errors are reported.
Go over the True and False responses of all the scenarios to make sure the script is performing as you expected.
After the script has run successfully using one thread, raise it to 10-20 threads for ten minutes and check:
- Are the users coming up as unique (if this was your intention)?
- Are you getting any errors?
- If you're running a registration process, take a look at your backend - are the accounts created according to your template? Are they unique?
- Test statistics on the summary report - do they make sense (in terms of average response time, errors, hits/s)?
Once your script is ready:
- Clean it up by removing any Debug/Dummy Samplers and deleting your script listeners
- If you use Listeners (such as "Save Responses to a file") or a CSV Data Set Config, don't use local paths! Replace the path you've used locally with just the filename (as if the file were in the same folder as your script)
- If you're using your own proprietary JAR file(s), upload them.
- If you're using more than one Thread Group (or not the default one), set the thread values before uploading the script to BlazeMeter.
Step 3: BlazeMeter SandBox Testing
If that's your first test, take a look at this article on how to create tests in BlazeMeter.
A Sandbox test runs up to 20 users on the console only (0 engines) for up to 20 minutes. Choose 'Sandbox' as the location to load the traffic from.
The Sandbox configuration allows you to test your script, backend and ensure everything works well.
Here are some common issues you might come across:
- Firewall - make sure your environment allows traffic from the BlazeMeter CIDR list, and whitelist those ranges
- Make sure all of your test files (e.g. CSVs, JARs, JSON, User.properties) are present
- Make sure you didn't use any local paths to the files
If you're still having trouble, look at the logs for errors (you should be able to download the entire log).
A Sandbox configuration can be:
- Engines: Console only (1 console, 0 engines)
- Threads: 1-20
- Ramp-up: 300-1200 seconds
- Iterations: forever
- Duration: 10-20 minutes
This will allow you to get enough data during your ramp-up period to analyze the results and ensure the script was executed as you expected.
You should also check the Monitoring Report to see how much memory & CPU was used. This should help you with step four.
Step 4: Set Up the Number of Users per Engine Using 1 Console & 1 Engine
Now that we're sure the script runs flawlessly in BlazeMeter, we need to figure out how many users we can apply to one engine.
Here is a way to figure this out without relying on the Sandbox test data.
Set your test configuration to:
- Number of threads: 500
- Ramp-up: 2400 seconds
- Iteration: forever
- Duration: 50 minutes
- Use 1 console and 1 engine.
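With a linear ramp-up, you can estimate how many users were active when the engine first crossed a resource threshold. This sketch uses the configuration above (500 threads over a 2400-second ramp-up); the 30-minute timestamp is a hypothetical observation, not a BlazeMeter value:

```python
# Estimate active users at a given moment of a linear ramp-up.
# Matches the step-4 config: 500 threads over 2400 seconds.
THREADS = 500
RAMP_UP_SECONDS = 2400

def users_at(seconds_elapsed):
    """Users started by a given second of a linear ramp-up (capped at THREADS)."""
    return min(THREADS, THREADS * seconds_elapsed // RAMP_UP_SECONDS)

# Hypothetical: CPU crossed 75% at the 30-minute mark (1800 s)
print(users_at(1800))  # 375
```

That estimate (here, 375 users) is the number you'd feed back into the next test run, as described below.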
Run the test and monitor your test's engine via the Monitoring Report.
If your engine didn't reach 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Change the number of threads to 700 and run the test again
- Raise the number of threads until you get to 1,000 threads or 60% CPU
If your engine passed 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Look at when your test first reached 75% CPU and note how many users were running at that point.
- Run the test again. This time, instead of 500 threads, set the number of threads to the user count you found in the previous test.
- Set the ramp-up time you want for the real test (300-900 seconds is a great start) and set the duration to 50 minutes.
- Make sure you don't go over 75% CPU or 85% memory usage throughout the test
To be safe, you can also decrease the number of threads per engine by 10%.
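The safety-margin arithmetic can be sketched in a couple of lines; the observed per-engine user count is a hypothetical example:

```python
# Derive a safer per-engine thread count by shaving a 10% margin off
# the user count observed when the engine first hit the 75% CPU mark.
# The observed value (420) is hypothetical.
def safe_threads_per_engine(observed_limit, margin=0.10):
    """Reduce the observed per-engine limit by a safety margin."""
    return int(observed_limit * (1 - margin))

print(safe_threads_per_engine(420))  # 378
```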
Note: An important detail to be aware of is that the user limit set for your subscription plan is not the same as the user limit one engine can handle, which will vary from one test script to the next. For example, if your plan allows for 1,000 users, that does not guarantee that all 1,000 users can execute on one engine.
Multi Location Tests
BlazeMeter uses a variety of cloud providers which have different types of machines and network infrastructures.
For a Multi Location test (CDN test), which runs engines on all of our cloud providers, it's recommended to verify that every load engine, regardless of cloud provider, can sustain the load and produce the expected results.
Step 5: Setup and Test Your Cluster
We now know how many threads we can get from one engine. At the end of this step, we'll know the number of users one Cluster (test) can get us.
A Cluster is a logical container which has one console and 0-14 engines.
Although you can create a test with more than 14 engines, doing so actually creates two clusters and clones your test across them.
The maximum of 14 engines per console is based on BlazeMeter's own testing, which verified that the console can handle the pressure of 14 engines - they generate a lot of data to process.
So, at this step, we'll take the test from step four, and change only the number of engines by raising them to 14.
Run the test for the full length of your final test. While the test is running, go to the Monitoring Report and:
- Verify that none of the engines pass the 75% CPU, 85% Memory limit
- Locate your console label (to find it, go to the Logs Report -> console’s log and look for its private IP). This should not reach the 75% CPU or 85% Memory limit.
If your console reached either limit, decrease the number of engines and run again until the console stays within these limits.
By the end of this step, you should know:
- The Users per Cluster you'll have
- The Hits/s per Cluster you'll reach
For more information about your Cluster's throughput, look at the statistics in the Aggregated Report.
Step 6: Use the 'Multi-Test' feature to Reach Your Maximum CC Goal
We've got to the final stage!
We know the script is working, we know how many users one engine can sustain, and we know how many users we can get from one Cluster.
Let’s assume these values:
- One engine can have 500 users
- The cluster will have 12 engines
- We aim to test for 50K users
To do this, we'll need 50,000 / (500*12) = 8.3 clusters.
We could go with 8 clusters of 12 engines (48K) and one cluster with 4 engines (the other 2K) - but it's better to spread the load like this:
Instead of 12 engines per cluster, we'll use 10. Therefore, we'll get 10*500 = 5K from each cluster and we'll need 10 clusters to reach 50K.
This helps us as we:
- Don't have to maintain two different test types
- We can grow by 5K by simply adding the same cluster/test again to the 'Multi Test' configuration (a 5K increment is much more common than 6K)
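The sizing arithmetic above can be sketched in a few lines, using the hypothetical figures from the text (500 users per engine, a 50K target):

```python
import math

# Step-6 sizing sketch: compare engines-per-cluster layouts for a
# target load. Figures match the hypothetical example in the text.
USERS_PER_ENGINE = 500
TARGET_USERS = 50_000

def clusters_needed(engines_per_cluster):
    """Return (number of clusters, users per cluster) for the target load."""
    users_per_cluster = USERS_PER_ENGINE * engines_per_cluster
    return math.ceil(TARGET_USERS / users_per_cluster), users_per_cluster

# 12 engines/cluster: 9 clusters, but the last one is only partially used
print(clusters_needed(12))  # (9, 6000)
# 10 engines/cluster: 10 identical clusters of 5K each
print(clusters_needed(10))  # (10, 5000)
```

The 10-engine layout divides the target evenly, which is exactly why the text prefers it: every cluster is identical and the test grows in clean 5K steps.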
We're now ready to create our final Multi-Test with 50K users:
- Select 'Create Multi-Test'.
- Add the test that you verified in step five. You can add it by using the Drag-and-Drop feature as many times as you need - it will simply duplicate itself (see image below)
- You can change the configuration of each test to load from a different region, have a different script/CSV/other file, use a different network emulation, different parameters etc.
- Your Multi-Test for 50K users is ready to go. Press start on the Multi-Test to launch all tests with 5K users from each one.
- The aggregated report of your Multi Test will start generating results in a few minutes. You can also see the results of each individual test by simply opening its report.
Set Up a Multi-Test
Running The Multi-Test
1. After saving the test and pressing the 'play' button, you'll be notified that you're about to launch a new load.
2. You'll also be given the option to select 'Synchronized Start' to ensure that all the servers are up before actually starting the test. This option is useful if you're concerned that some servers or locations are significantly slower than others and you want to synchronize them.
3. After clicking on 'Launch Servers' you'll be shown the 'Booting' Window. This shows you the progression of launching the load engines and consoles across the entire Multi Test.
That's it! Your Multi-Test is up and running!