Once a GUI Functional Test executes, the test report will soon be available for review. The report is divided into "Summary" and "Details" tabs; the Details tab is further divided into four sections: "Video" (or "Selenium Commands"), "Waterfall", "Logs", and "Metadata".
Upon first viewing the report, you will be greeted by the Details tab, which displays the video of your script executing in the cloud engine's browser.
Note: If you set blazemeter.videoEnabled to "no" in your Selenium script, only the list of Selenium commands executed will appear instead of a video.
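As a minimal sketch of how that capability might be passed (the grid URL is a placeholder and the exact capability plumbing depends on your setup), a Selenium 4 Python script could set it like this:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
# Per the note above, "no" disables video recording for the session.
options.set_capability("blazemeter.videoEnabled", "no")

driver = webdriver.Remote(
    command_executor="https://<your-grid-url>/wd/hub",  # placeholder URL
    options=options,
)
```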
The Details Tab
The Details tab, the first tab displayed when you open the report, provides a wealth of detail about the Selenium test.
Note: You will likely see a spinning progress icon at first while the test starts running. This is normal; the test needs to execute for a while before any data can be displayed.
The Details tab is divided into four sections, listed on the left of the report and described in turn below.
Video / Selenium Commands
The Video section, the first displayed when viewing the Details tab, shows both the video recording of your Selenium test executing (in the engine's browser) and a list of all Selenium steps the script executed, organized by test case/suite.
If the video option was disabled in your script, then "Video" will be replaced by "Selenium Commands" and only the list of Selenium steps will be displayed.
You can play the entire video to watch the script execute from start to finish, or click individual steps in the Selenium commands list to the right to jump to the timestamp in the video at which that step executed.
The commands are displayed in one of two modes, selected via the toggle located at the top-right of the commands list:
- Test Steps Only shows only the commands from the script.
- All Commands shows all the commands necessary to launch and run the test.
This is especially useful for debugging failed steps: click a failed command in the list to jump to the point in the video where the failure occurred.
In addition to video timestamps, you can view screenshots, one for each URL the script navigated to. The screenshot icon is located to the right of each URL. Note that no screenshots are recorded after a go or open action that navigates to a different domain; to work around this, close and reopen the extension after moving to the new domain.
To the right of each screenshot icon is a link icon; click it to copy the URL to your clipboard.
Look for a Webdriver closed command to appear at the end of the list. If this command does not appear, then there was a problem with the test.
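The Webdriver closed entry typically corresponds to the script shutting the session down cleanly. A common pattern, sketched below in Python, is to quit the driver in a finally block so the session closes even when a step fails:

```python
from selenium import webdriver

driver = webdriver.Chrome()  # or webdriver.Remote(...) in a cloud setup
try:
    driver.get("https://example.com")
    # ... remaining test steps ...
finally:
    # Quitting the driver ends the session cleanly, so the command list
    # can finish with the expected "Webdriver closed" entry.
    driver.quit()
```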
Waterfall
The waterfall report shows the page load time at the network level for each step, which can be useful when looking for pages that take too long to load and thus delay the test.
The waterfall report can aid you in uncovering performance issues. As you review the waterfall report, you can click to expand each performed request to view more details about it. This is similar to what you would see if you were to open the developer tools for a real browser and examine the network tab.
You can also hover your mouse over each graph in the waterfall report to see expanded information on request phases and their elapsed times.
Logs
The Logs section provides the list of log files available for the test.
Click the "Select a log file" field to open a drop-down menu which will display all logs available for download.
The logs available in this list will vary depending on what options you had chosen when configuring your script and what type of user you are. For example, admin users will have more logs available to view than standard users.
Metadata
The Metadata section displays detailed information about the test itself.
Here you can find some useful data about the test, such as its session ID, driver used, browser used, etc.
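Much of this metadata originates in the WebDriver session itself. As an illustration (a sketch, not how the report is generated), a Selenium Python script can read the same identifiers:

```python
from selenium import webdriver

driver = webdriver.Chrome()
print(driver.session_id)                          # the session ID shown in the report
print(driver.capabilities.get("browserName"))     # the browser used
print(driver.capabilities.get("browserVersion"))  # its version
driver.quit()
```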
If a command had trouble executing during the test, it is flagged with an orange exclamation mark icon. Click this icon to open the Errors tab (which appears only if errors occurred) and view more details.
Note: Each command's response includes a delay that is equal to the implicit wait timeout. To shorten this delay, which defaults to 60 seconds, you can change the webdriver's implicit wait parameter within your test script. We recommend setting it to 30 seconds.
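For example (a Python sketch; every WebDriver binding exposes the same setting), lowering the implicit wait to the recommended value looks like this:

```python
from selenium import webdriver

driver = webdriver.Chrome()
# Lower the implicit wait from the 60-second default noted above to the
# recommended 30 seconds, shortening each command's response delay.
driver.implicitly_wait(30)
```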
The Summary Tab
The Summary tab provides a high-level overview of the test's execution. Here you'll find useful information such as how long the test took to execute, how many test cases were included, how many test cases succeeded, etc.
The Summary report is divided into two sections.
First, the Latest Runs section displays the cumulative statistics of all test sessions.
Latest Runs includes the following:
- Test Cases Passed - The total "pass" percentage for all test cases in all sessions.
- Number of Test Cases - The total number of unique test cases executed by the script.
- Number of Test Suites - The total number of unique test suites executed by the script.
- Duration - The difference between the "Started" date/time and the "Ended" date/time. Be aware that this may span multiple test launches.
- Started - The date and time the first command of the first session was executed.
- Ended - The date and time the last command of the last session was executed.
Optionally, you can add your own notes in the "Add Report Notes..." field, then click the "Save" button to save them to that specific report.
Second, the Test Cases section provides additional details specific to each test case within the test.
From left to right, this section displays the test suite, the test cases belonging to it, and each case's state. Test cases may have the following states (see the sketch after this list):
- Passed, designated by a green checkmark.
- Failed, designated by a red X.
- Undefined, designated by a blue dash, meaning no status was set in the script.
- Broken, designated by an orange exclamation mark, meaning the test could not be executed. Click the exclamation mark to navigate to that test case's description within the Details tab.
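These states generally reflect the results your test framework reports for each case. As a hedged sketch (the names are illustrative, and the exact mapping of framework results to report states may vary), a Python unittest script that yields one passed and one failed case might look like this:

```python
import unittest
from selenium import webdriver

class ExampleSuite(unittest.TestCase):  # typically surfaces as a test suite
    def setUp(self):
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

    def test_title_present(self):  # passing assertion: green checkmark
        self.driver.get("https://example.com")
        self.assertIn("Example", self.driver.title)

    def test_title_mismatch(self):  # failing assertion: red X
        self.driver.get("https://example.com")
        self.assertIn("Nonexistent", self.driver.title)

if __name__ == "__main__":
    unittest.main()
```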
Lastly, if you click the browser name in this table, you will be taken to the Details tab for that test.