You may find that when you run a test with a low number of threads (users), the test executes successfully, but once you scale up to a higher load, the test fails or returns unexpected errors. This is often a sign that your test needs calibration, which ensures it will run reliably at higher loads.
This article offers advice for calibrating a Taurus test to get the best results in BlazeMeter. Taurus is an open source test automation framework that makes it easy to run more than 20 open source testing tools.
Overview
- Step 1: Write Your Script
- Step 2: Test it Locally with Taurus
- Step 3: Test in BlazeMeter
- Step 4: Run Your Test
- Coming Soon: Use the 'Multi-Test' feature to Reach Your Maximum CC Goal
Step 1: Write Your Script
There are various ways to create your script, which include:
- Creating a new JMeter script via a Taurus YAML file. For details on how to do so, please refer to the Taurus article Creating a JMeter Script Using YAML.
- Creating a new YAML configuration file that references an existing script (such as a JMeter, Selenium, or Gatling script); a minimal example follows this list. The Learning Taurus article series provides details on each option, such as How to Run an Existing JMeter Script.
- Using the BlazeMeter Proxy Recorder or the BlazeMeter Chrome Extension to record your script.
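For the second option, a minimal sketch of a YAML file that points at an existing JMeter script might look like the following (my-test.jmx is a hypothetical file name; replace it with the path to your own script):

execution:
- scenario: existing-jmx

scenarios:
  existing-jmx:
    script: my-test.jmx   # hypothetical path - point this at your existing JMeter script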
If you generate a JMeter script from a recording, keep in mind that:
- You'll need to change certain parameters, such as Username and Password. You can also supply a CSV file with those values so that each virtual user is unique (see the sketch after this list).
- You might need to extract elements such as Token-String, Form-Build-Id, and others by using Regular Expressions, the JSON Path Extractor, or the XPath Extractor. This will enable you to complete requests like "AddToCart", "Login", and more.
- Keep your script parameterized and use configuration elements such as HTTP Request Defaults to make it easier to switch between environments.
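As an illustration of the first two points, the sketch below feeds usernames and passwords from a CSV file with Taurus's data-sources option and extracts a token with extract-regexp. The users.csv file, the /login path, and the token pattern are all assumptions made for this example; adjust them to match your application.

scenarios:
  Thread Group:
    data-sources:
    - path: users.csv                   # assumed CSV with one "username,password" pair per line
      delimiter: ','
      variable-names: username,password
    requests:
    - label: login
      url: http://blazedemo.com/login   # assumed login endpoint
      method: POST
      body:
        user: ${username}
        pass: ${password}
      extract-regexp:
        token:                          # stored as ${token} for use in later requests
          regexp: name="token" value="(.+?)"
          default: NOT_FOUND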
Step 2: Test Your Script Locally with Taurus
Begin debugging your script with one thread and one iteration.
execution:
- concurrency: 1
  ramp-up: 1m
  hold-for: 2m30s
  iterations: 1
  scenario: Thread Group

scenarios:
  Thread Group:
    requests:
    - label: blazedemo
      method: GET
      url: http://blazedemo.com/
While the test is running, watch for any items listed under the "Errors" section, and keep an eye on the "Connect" and "Latency" values for irregularities.
After your local Taurus test completes, check the bzt.log and jmeter.log files that were generated for any errors or unexpected behavior.
You can examine the test results in JMeter by opening the kpi.jtl file in a "View Results Tree" listener. You can also review the JMX file that Taurus generated: a new JMX if you defined the scenario in the YAML, or a modified copy if you ran an existing script through the YAML.
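If you want these artifacts written to a predictable location instead of a per-run temporary folder, Taurus lets you set the artifacts directory in its settings section. The path below is only an example:

settings:
  artifacts-dir: ./taurus-artifacts/%Y-%m-%d_%H-%M-%S   # bzt.log, jmeter.log, kpi.jtl and the generated JMX are written here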
After the script has run successfully using one thread, raise it to 10-20 threads for ten minutes (see the sketch after this checklist) and check:
- Are the users coming up as unique (if this was your intention)?
- Are you getting any errors?
- If you're running a registration process, take a look at your backend. Are the accounts created according to your template? Are they unique?
- Check test statistics under "Cumulative Stats". Do they make sense (in terms of average times, hits)?
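A possible configuration for this stage, assuming 20 threads and a ten-minute hold, might look like this (keep the scenarios section from the previous step):

execution:
- concurrency: 20
  ramp-up: 1m
  hold-for: 10m
  scenario: Thread Group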
Step 3: Run and Analyze Your Performance Test in BlazeMeter
Now that your test is running successfully locally, you're ready to test it in the cloud! There are two ways you can run your Taurus test via BlazeMeter:
- You can upload your test to BlazeMeter as described in our article Creating a New Taurus Test.
- You can edit the YAML script so it runs in the BlazeMeter cloud automatically, as detailed in Scaling With Cloud Provisioning (see the sketch below).
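For the second option, switching Taurus to cloud provisioning comes down to adding a provisioning setting and your BlazeMeter API credentials. A minimal sketch (the token value is a placeholder) could look like this:

provisioning: cloud

modules:
  cloud:
    token: id:secret   # placeholder - use your own BlazeMeter API key ID and secret

In practice you would normally keep the token in your personal ~/.bzt-rc file rather than in the test YAML itself.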
Set Up the Number of Users Per Engine Using One Engine
Now that we're sure the script runs flawlessly in BlazeMeter, we need to figure out how many users we can apply to one engine. (For details on how to configure users per engine in Taurus, please refer to the article Load Settings for Cloud.)
Set your test configuration to:
- Concurrency: 300
- Ramp-up: 15 minutes
- Hold-for: 5 minutes
- Do not use any local engines
- Use 1 cloud engine
- Run the test and monitor your test's engine via the Monitoring Report.
execution:
- concurrency: 300
  ramp-up: 15m
  hold-for: 5m
  scenario: Thread Group
  locations:
    us-east-1: 1

scenarios:
  Thread Group:
    requests:
    - label: blazedemo
      method: GET
      url: http://blazedemo.com/
If your engine didn't reach either 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Change the number of threads to 700 and run the test again
- Raise the number of threads until you get to 1,000 threads or 60% CPU
If your engine passed 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Look at when your test first reached 75% CPU and note how many users were running at that point.
- Run the test again. This time, instead of the concurrency you used before, set the concurrency to the number of users you found in the previous run.
- Set the ramp-up time you want for the real test (300-900 seconds is a good start) and set the duration to 50 minutes; a sketch follows this list.
- Make sure you don't exceed 75% CPU or 85% memory usage throughout the test.
To be safer, you can also reduce the number of threads per engine by about 10%.
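Putting the last two bullets together, a re-run might look like the sketch below. The concurrency of 250 is purely an assumed result of the previous run; substitute the per-engine user count you actually measured.

execution:
- concurrency: 250      # assumed users-per-engine value found in the previous run
  ramp-up: 10m          # 600 seconds, within the suggested 300-900 second range
  hold-for: 50m
  scenario: Thread Group
  locations:
    us-east-1: 1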
Note: An important detail to be aware of is that the user limit set for your subscription plan is not the same as the user limit one engine can handle, which will vary from one test script to the next. For example, if your plan allows for 1,000 users, that does not guarantee that all 1,000 users can execute on one engine.
Coming Soon: Use the 'Multi-Test' Feature to Reach Your Maximum CC Goal
Taurus V4 will soon support multi-tests. This feature is in the works, so please stay tuned to this space for future details!
Step 4: Run Your Test
With your calibration testing complete, you're ready to run your real test! Now you can:
- Configure the actual amount of users you require.
- Set the number of engines you need for handling the load.
Ensure you keep the users per engine the same as your final successful result in your calibration tests from Step 3.
Consider the following example, where we run a 1,000-user test using five engines:
execution:
- concurrency: 1000
  ramp-up: 30m
  hold-for: 2h
  scenario: Thread Group
  locations:
    us-east-1: 2
    us-west-1: 3

scenarios:
  Thread Group:
    requests:
    - label: blazedemo
      method: GET
      url: http://blazedemo.com/
Once you're ready, press the "Run Test" button!
If you run into any issues, you can report them to support@blazemeter.com. Thanks!