Note: This feature is currently in beta.
Running a load test is a great way to measure application server performance. To gain an even better understanding of the real user experience under load, enable the End User Experience Monitoring option in BlazeMeter.
This feature executes a Selenium test via Taurus in the background while your load test is running. The Selenium test generates a Waterfall Report that shows what a user would see in their web browser at different points during the load test. This can be especially helpful when trying to debug why a certain page failed to load properly, from a user's point of view, at a certain point in the load test.
- How Monitoring Works
- Configure End User Experience Monitoring
- Run the Test
- View the Report
- View the End User Experience Monitoring Tab
- View the Waterfall Report
How Monitoring Works
When a test is executed with End User Experience Monitoring enabled, BlazeMeter wraps each specified label and URL in a Taurus YAML configuration file. Alternatively, you can supply a YAML configuration of your own. BlazeMeter then executes the script via Taurus and Selenium. The script, which contains only the specified URLs, runs for the full duration of the load test.
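The generated configuration resembles a standard Taurus Selenium execution. The following is an illustrative sketch, not the exact file BlazeMeter produces; the scenario name, label, and URL are placeholders:

```yaml
execution:
- executor: selenium
  scenario: monitored-pages    # hypothetical scenario name

scenarios:
  monitored-pages:
    requests:
    - label: Homepage          # the label you enter in the UI
      url: https://example.com/
```

Taurus repeats this scenario in the background for the duration of the load test, recording page load timings for each pass.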
Configure End User Experience Monitoring
Follow these steps:
- In the Performance tab, click Create Test, Performance Test.
- Scroll down to the End User Experience Monitoring section and toggle the button ON.
- Click one of the tabs: URL List (the default) or Selenium Scenarios.
URL List
Follow these steps:
- In the URL List tab, specify at least one URL to monitor during the test.
Note: You must include "http://" or "https://" with each URL.
- Enter a label name to identify the URL.
A line to add another URL appears automatically.
Note: Free-tier plans can monitor one URL. Paid plans can monitor additional URLs, depending on your plan type.
Upload Selenium Scenarios
Instead of specifying the URLs, you can upload your own Selenium scenarios to run via Taurus.
Follow these steps:
- Create a Taurus YAML configuration to execute your Selenium scenario and upload it. Ensure that the main test script is still selected as the main file (shown under the "Start test with:" text), not the Selenium scenario.
Note: End User Experience Monitoring only supports YAML files in which the Selenium scenario is scripted within the YAML. It does not support pointing a YAML to a separate script, such as one written in Java or Python.
- In the Selenium Scenarios tab, check the box for each script that you want to execute as a monitoring test.
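A Taurus YAML with the Selenium scenario scripted inline might look like the following sketch. The scenario name, URL, and CSS selector here are hypothetical, and the action syntax follows the Taurus scripting format:

```yaml
execution:
- executor: selenium
  scenario: login-check        # hypothetical scenario name

scenarios:
  login-check:
    requests:
    - label: Login page
      actions:
      - go(https://example.com/login)   # navigate to the page
      - waitByCSS(form.login-form)      # wait for the form to render
```

Because the scenario is scripted directly in the YAML (rather than pointing to an external Java or Python script), it satisfies the requirement described in the note above.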
Run the Test
After specifying the URLs or uploading your own Selenium scenarios, click Run Test.
Once the test starts, you will see both the load test and the user experience monitoring test executed simultaneously.
View the Report
Once the test report appears, you will see the usual report tabs, along with the new End User Experience Monitoring tab.
There are two ways to view the End User Experience Monitoring report:
- If you want to go straight to the monitoring report, navigate to the End User Experience Monitoring tab.
- If you want to see monitoring results alongside other test metrics first, navigate to the Timeline Report tab. Under the KPI Selection on the left, expand the Real User Experience section.
You will find your named label(s) for your monitored URL(s). Select a label and it will be displayed as a series of dots on the graph.
The vertical Y-axis represents page load time in the web browser; the horizontal X-axis represents elapsed test time. Each dot represents the execution time of the Selenium test at a specific moment, where higher dots indicate longer execution times.
Click a dot to open the End User Experience Monitoring tab.
View the End User Experience Monitoring Tab
Whichever method you choose, click the End User Experience Monitoring tab to see a graph consisting of individual timeline columns.
Each column in this graph for a label correlates with each dot in the Timeline Report for the same label.
To open a waterfall report for that moment in time, click a column in the monitoring report.
View the Waterfall Report
The waterfall report shows you what your users experience when your site is under load. More specifically, the time shown for each horizontal bar refers to the page load time at the network level. For example, you might find that your backend can handle the load, but the page takes ten seconds to reach a state in which a user can adequately interact with it. The waterfall report can help you uncover browser-side performance issues, such as slow page loading in the browser, that JMeter alone could never identify.
As you review the waterfall report, you can click to expand each performed request to view more details about it. This is similar to what you would see if you were to open the developer tools for a real browser and examine the network tab.
You can also hover your mouse over each graph in the waterfall report to see expanded information on request phases and their elapsed times.
The waterfall report reveals how long each request took and which requests had the most impact on page load time.