NOTE: This feature is currently in beta.
Running a load test is a great way to measure application server performance, but what if you want to know what the real user experience (UX) is under load?
Now you can, by enabling the End User Experience Monitoring option in BlazeMeter!
This new feature executes a Selenium test in the background, via Taurus, while your load test is running. The Selenium test generates a Waterfall Report that shows what a user would see in their web browser at different points during the load test. This is especially helpful when debugging why a certain page failed to load properly, from the user's point of view, at a particular point in the load test.
Currently, BlazeMeter offers URL monitoring abilities, but stay tuned for more (details at the end).
In this article we will learn:
- How Monitoring Works
- Choose a URL or Selenium Scenario to Monitor
- Run the Test
- View the Report
- End User Experience Monitoring Tab
- Waterfall Report
- Coming Soon
When a test is executed with the "End User Experience Monitoring" feature enabled, BlazeMeter wraps each specified label and URL in a YAML configuration file. Alternatively, you can supply a YAML configuration of your own. BlazeMeter then executes the script via Taurus and Selenium. The script, which contains only the URLs you specified, runs for the full duration of the load test. Now let's learn how to run End User Experience tests.
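Conceptually, the configuration BlazeMeter generates behind the scenes resembles a standard Taurus Selenium execution. The sketch below is only an illustration of that shape; the scenario name, label, URL, and duration are placeholders, not values BlazeMeter actually produces:

```yaml
# Hypothetical sketch of a Taurus config that monitors one labeled URL
# with a Selenium-driven browser for the duration of a load test.
execution:
- executor: selenium
  concurrency: 1
  hold-for: 10m            # placeholder: matched to the load test duration
  scenario: homepage-monitor

scenarios:
  homepage-monitor:
    browser: Chrome
    requests:
    - url: https://example.com/   # placeholder URL
      label: Homepage             # the label shown in the report
```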
To get started, first enable the feature. There are then two options available for configuring your end user experience monitoring.
Method 1: URL List
You can simply specify at least one URL to monitor during the test, which is the default option. Include a label name to identify the URL. (Note: You must include "http://" or "https://" with each URL.)
After you specify a label and URL, the plus (+) button becomes enabled; use it to add more URLs if needed. Free-tier plans can monitor one URL, while paid plans can monitor additional URLs, depending on your plan type.
Method 2: Selenium Scenarios
Instead of specifying URLs, you can upload your own Selenium scenarios to run via Taurus. First, create a Taurus YAML configuration that executes your Selenium scenario, then upload it. Make sure the main test script is still selected as the main file (shown under the "Start test with:" text), not the Selenium scenario.
Note: End User Experience Monitoring only supports YAML files in which the Selenium scenario is scripted within the YAML. It does not support pointing a YAML to a separate script, such as one written in Java or Python.
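For instance, a minimal Taurus YAML with the Selenium scenario scripted entirely inline (all names and URLs below are illustrative) might look like this:

```yaml
# Illustrative Taurus config: the Selenium scenario is defined inline
# in the YAML itself, not in a separate Java or Python script file.
execution:
- executor: selenium
  concurrency: 1
  hold-for: 5m
  scenario: checkout-flow

scenarios:
  checkout-flow:
    browser: Chrome
    timeout: 60s
    requests:
    - url: https://example.com/      # illustrative URL
      label: Landing page
    - url: https://example.com/cart  # illustrative URL
      label: Cart page
```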
Next, enable the End User Experience Monitoring option, then click the Selenium Scenarios tab.
Check the script or scripts you wish to execute as monitoring tests. It's as simple as that!
Once the test starts, you'll see both the load test and the user experience monitoring test running simultaneously.
Once the test report appears, you'll be presented with the usual report tabs, along with the new End User Experience Monitoring tab.
There are two ways to view the End User Experience Monitoring report:
(1) Navigate to the End User Experience Monitoring Tab if you want to jump straight to the monitoring report.
(2) Navigate to the Timeline Report tab first if you prefer to see monitoring results alongside other test metrics. Under the KPI Selection on the left-hand side, expand the Real User Experience section.
You will find your named label(s) for your monitored URL(s). Select a label, and it will be displayed as a series of dots on the graph.
The Y (vertical) axis represents page load time in the web browser. Each dot represents the execution time of the Selenium test at a specific moment in time; higher dots indicate longer execution times.
Click a dot to open the End User Experience Monitoring tab.
Whichever of the two methods above you follow, once you reach the End User Experience Monitoring tab you will see a graph consisting of individual timeline columns. Each column for a given label corresponds to a dot for the same label in the Timeline Report.
Clicking a column in the monitoring report will open a waterfall report for that moment in time.
The waterfall report shows you what your users experience when your site is under load. More specifically, the time shown for each horizontal bar refers to page load time at the network level. For example, you might find that your backend can handle the load, yet the page takes ten seconds to reach a state a user can adequately interact with. The waterfall report can help you uncover browser-side performance issues, such as slow page loading, that JMeter alone could never identify.
As you review the waterfall report, you can click to expand each performed request to view more details about it. This is similar to what you would see if you were to open the developer tools for a real browser and examine the network tab.
You can also hover your mouse over each graph in the waterfall report to see expanded information on request phases and their elapsed times.
The waterfall report reveals how long each request took and which requests had the most impact on page load time.
A number of features are being worked on to further enhance End User Experience Monitoring. Keep an eye out for these planned additions:
- The BlazeMeter Chrome Extension will eventually allow taking a recorded combined JMeter + Selenium script and running it in BlazeMeter as a load test with End UX Monitoring.
- A feature to display screenshots of the browser at different points in the test in a filmstrip-like view.