When executing your JMeter test through Blazemeter, it's of course important to verify that all users (threads) you intended to run actually did run. When reviewing a test report, there are two totals that will be of interest:
- How many total users ran?
- What was the highest number of users that ran simultaneously?
In Blazemeter, we refer to these totals as total users and max users, respectively. When reviewing your test report and assessing whether your test ran as intended, it is important to understand the difference between these two totals and where to find the count for each. We'll discuss each of them in detail here.
Max Users
As mentioned in the Summary Report guide, Max Users is the maximum number of concurrent users generated at any given point in the test run.
This does NOT refer to the total users; it counts only the users that ran simultaneously at any given moment. As a result, Max Users may not match your total users, which may be significantly higher.
For example, consider a test configured to run a total of 750 users.
When this test executes, if the Summary Report shows a Max Users value of 700 VU (virtual users), that means that of the 750 total users that ran, no more than 700 of those users ran simultaneously.
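If you want to double-check a Max Users figure yourself, you can derive the peak concurrency for an engine from its jmeter.log, since JMeter logs each thread's start and finish. The following is a minimal Python sketch under that assumption; the log file path is a placeholder, and the exact message wording may vary between JMeter versions.

```python
def peak_concurrency(log_path):
    """Track how many threads were alive at once in a jmeter.log.

    Relies on the standard JMeterThread lifecycle messages, e.g.:
      "Thread started: Thread Group 1-42"
      "Thread is done: Thread Group 1-42"
    """
    active = 0
    peak = 0
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "Thread started:" in line:
                active += 1
                peak = max(peak, active)
            elif "Thread is done:" in line:
                active -= 1
    return peak

if __name__ == "__main__":
    # "jmeter.log" is a placeholder; use the file from your artifacts.zip.
    print("Peak concurrent users:", peak_concurrency("jmeter.log"))
```

Keep in mind this yields the peak for a single engine's log; when a test runs on multiple engines, the report's Max Users reflects the combined concurrency across engines, which per-engine peaks do not necessarily add up to.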
There may be times when you want to achieve a higher Max Users count, or you may even want your Max Users to match your total users (in other words, ensure all users run concurrently). If this is the case, please review the guide How Can I Achieve a Higher Count of Concurrent Users?
Total Users
Whereas Max Users may be only a fraction of your total users, there may be times when you want to verify that all users executed. In that case, you will want to verify the grand total of users that ran, not just those that ran concurrently.
The total users count is not displayed in the Blazemeter test report UI, but JMeter records all users that executed, so the total can be easily verified. To do so, perform the following steps:
- Navigate to the Logs tab of your test report.
- Click the Log: drop-down menu and select artifacts.zip.
- Click the artifacts.zip link that appears to download the zip.
- Open the downloaded artifacts.zip, then find and open the jmeter.log in a text editor.
- The log records each thread (user) that executed, one line at a time, so you can verify the total number of users that ran by checking that the highest-numbered thread finished. For example, if you intended a total of 25 users to run across one thread group, you would look for confirmation that the last thread finished, which appears as follows in the log:
INFO o.a.j.t.JMeterThread: Thread is done: Thread Group 1-25
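Rather than scanning the log by eye, you can count the finished threads programmatically. Below is a minimal Python sketch that parses the "Thread is done:" lines shown above; the file path is a placeholder, and the parsing assumes the standard JMeterThread message format.

```python
import re
from collections import defaultdict

# Matches e.g. "INFO o.a.j.t.JMeterThread: Thread is done: Thread Group 1-25"
DONE_LINE = re.compile(r"Thread is done: (.+)-(\d+)\s*$")

def finished_threads(log_path):
    """Count finished threads per thread group in a jmeter.log and
    record the highest thread number seen for each group."""
    counts = defaultdict(int)
    highest = defaultdict(int)
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = DONE_LINE.search(line)
            if match:
                group, number = match.group(1), int(match.group(2))
                counts[group] += 1
                highest[group] = max(highest[group], number)
    return counts, highest

if __name__ == "__main__":
    # "jmeter.log" is a placeholder; point it at the file from artifacts.zip.
    counts, highest = finished_threads("jmeter.log")
    for group, count in counts.items():
        print(f"{group}: {count} threads finished (highest: {group}-{highest[group]})")
```

For the 25-user example above, you would expect the output to report 25 threads finished, with Thread Group 1-25 as the highest.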
Note: If your test spread your total users across multiple engines, then while viewing the Logs tab of the report, use the Load Engines: drop-down menu to select each engine, one at a time, and download the respective artifacts.zip for each.
Once you have the artifacts for each engine, it's simply a matter of math: note how you divided the users across the engines in your original test configuration, as visible in the Load Distribution section of your test configuration.
For example, assume you configure a test to run 750 total users, distributed as 25 users per engine, which requires 30 engines (25 users x 30 engines = 750 total users). For the purpose of this example, we will assume the test has only one thread group.
In this scenario, to verify that all 750 users executed, check each engine's jmeter.log for confirmation that "Thread Group 1-25" finished. If every engine's jmeter.log shows the completion of "Thread Group 1-25", then all 750 users executed.
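Checking many logs by hand is tedious, so you may prefer to script the per-engine check. The sketch below is hypothetical: it assumes you have already extracted each engine's artifacts.zip into its own folder (the engine-*/ directory layout and the default "Thread Group 1" name are assumptions, so adjust them to your setup).

```python
from pathlib import Path

USERS_PER_ENGINE = 25   # taken from the Load Distribution settings
ENGINES = 30            # 25 users x 30 engines = 750 total users
# Final thread line for the example's single, default-named thread group.
FINAL_THREAD = f"Thread is done: Thread Group 1-{USERS_PER_ENGINE}"

# Hypothetical layout: each engine's artifacts.zip extracted to engine-01/ etc.
logs = sorted(Path(".").glob("engine-*/jmeter.log"))

verified = 0
for log_path in logs:
    text = log_path.read_text(encoding="utf-8", errors="replace")
    if FINAL_THREAD in text:
        verified += 1
    else:
        print(f"WARNING: {log_path} does not show the final thread finishing")

print(f"{verified} of {len(logs)} engine logs verified, "
      f"accounting for {verified * USERS_PER_ENGINE} of "
      f"{ENGINES * USERS_PER_ENGINE} expected users")
```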
Note: In this example, downloading and checking 30 jmeter.log files might feel a bit excessive, and it is rarely necessary: a Blazemeter test seldom executes only a fraction of the total users. If some users did fail to run, it will usually be apparent from other errors displayed in the test report, such as a problem observed in the Engine Health Report and/or the Errors Report, which can often be resolved or prevented via the test calibration process.