This glossary presents the terms used in BlazeMeter and their definitions.

AWS (Amazon Web Services)

Amazon Web Services is a collection of remote computing services that together make up a cloud computing platform, offered over the Internet by Amazon. BlazeMeter uses Amazon's EC2 service and enables you to launch a dedicated cluster at one of eight AWS locations.

Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds, or even thousands of server instances simultaneously.


PaaS (Platform as a Service)

Platform as a Service is a category of cloud computing services that provides a computing platform and a solution stack as a service. Along with software as a service (SaaS) and infrastructure as a service (IaaS), it is a service model of cloud computing.

Cloud Computing

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Clouds can be classified as public, private or hybrid.

On-demand JMeter cloud

A pre-configured JMeter environment in the cloud that lets you start running performance tests instantly.


VPC (Virtual Private Cloud)

Virtual Private Cloud (VPC) is an on-demand, configurable pool of shared computing resources allocated within a public cloud environment, providing a certain level of isolation between the different organizations using the resources.

JMeter GUI

This refers to running a BlazeMeter script using the console's graphical user interface. You can use the JMeter GUI to edit your script, debug it, and run it via the console.

Run Headless

Running headless, without the console's GUI, starts JMeter immediately and provides it with the uploaded test script (e.g. my_script.jmx). This configuration is highly efficient in resource allocation, especially in terms of memory. With a headless configuration you can use more engines and more intensive scripts, with less risk of encountering memory-related issues in JMeter.

JMeter Engine/Console

BlazeMeter generates load using a distributed JMeter architecture. A JMeter console is used to control the test. Each JMeter engine takes part in generating the actual load and simulates the number of threads/virtual users specified in the script you provide. These are all virtual servers running JMeter engines, used to drive load against a web application with scripts that mimic an actual business flow.


Cluster

A cluster is a collection of servers in the cloud. BlazeMeter reduces the time required to set up and run tests to minutes, allowing you to quickly scale capacity, both up and down, as your testing requirements change.


The time from sending the request, through its processing on the server side, to the moment the client receives the first byte of the response.

Concurrent Users

The number of users accessing the system simultaneously. You can provide this value in the script or configure it before executing a BlazeMeter test.


Setting a time frame in advance to run the test automatically, either once or on a recurring pattern.

User Key

The user key is a 20-character alphanumeric string unique to each BlazeMeter user. This key is used for authentication when communicating with third-party tools.
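As an illustration of how such a key is typically supplied, the sketch below builds an HTTP Basic Authorization header from a user key. Both the authentication scheme and the sample key are assumptions for illustration only; consult the specific integration's documentation for the exact mechanism it expects.

```python
import base64

def auth_header(user_key):
    """Build an HTTP Basic Authorization header from a user key.

    NOTE: the Basic-auth scheme here is an assumption for illustration;
    the actual mechanism depends on the third-party tool.
    """
    token = base64.b64encode(f"{user_key}:".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# A fake 20-character alphanumeric key, purely for illustration.
print(auth_header("abcd1234efgh5678ijkl"))
```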

Private IP

By default, BlazeMeter provides a dynamic IP that changes on each test initiation. If your project requires a dedicated IP, BlazeMeter provides one at an additional cost, on a per-IP, per-month basis.

JTL file

JMeter can create text files containing the results of a test run. These are normally called JTL files. There are two types of JTL files: XML and CSV. BlazeMeter reports are stored with the .jtl file extension.
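A CSV JTL file is plain comma-separated text, so any CSV reader can process it. A minimal sketch follows; the column names used (`timeStamp`, `elapsed`, `label`, `success`, `bytes`) are JMeter's defaults and may differ if the results-save configuration was changed.

```python
import csv
import io

# A tiny inline CSV JTL fragment for illustration (real files are written by JMeter).
jtl_text = """timeStamp,elapsed,label,success,bytes
1000000,120,Home Page,true,5120
1000400,310,Login,false,980
"""

# Parse each result row into a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(jtl_text)))
for row in rows:
    print(row["label"], row["elapsed"], row["success"])
```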


Iterations

Iteration is the number of times to perform the test. Iterations are implemented for each available thread group.

Ramp up

The time duration it takes to start all of the threads you have configured. This scripting parameter is used to simulate actual user behavior when performing a load test.
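JMeter spaces thread starts evenly across the ramp-up period, so 10 threads with a 20-second ramp-up means one new thread every 2 seconds. A minimal sketch of that schedule:

```python
def thread_start_times(num_threads, ramp_up_seconds):
    """Return the offset (in seconds) at which each thread starts.

    Thread starts are spaced evenly across the ramp-up period:
    one new thread every ramp_up_seconds / num_threads seconds.
    """
    interval = ramp_up_seconds / num_threads
    return [round(i * interval, 3) for i in range(num_threads)]

# 10 threads over a 20-second ramp-up: one new thread every 2 seconds.
print(thread_start_times(10, 20))
# [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0]
```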

Thread Group

A Thread Group defines a pool of users that will execute a particular test case against your server.

Selenium WebDriver

Selenium WebDriver makes direct calls to the browser using each browser's native support for automation. The Selenium scripts are run at predefined intervals throughout the test, and the results can be observed on the WebDriver tab.

These results display an actual user browsing the web application while the server is being hit with the configured load.

Aggregate Report

The aggregate report has a first row, labeled ALL, summarizing all of the requests made during the test, and an individual row for each named request in your test. If you are writing your own JMeter script, these are the labels you use in your script. All times are in milliseconds.

Samples (Aggregate Report)

Samples are the actual requests sent to the server, as configured in the script. The most common sample is the HTTP Request (in JMeter), which lets you send HTTP/HTTPS requests to a web server. These can be viewed in the Aggregate Report.

Average (Aggregate Report)

This is the average response time for each label in the script. While the test is running, it displays the average of the requests already executed, and the final value once test execution has finished.

Median (Aggregate Report)

This is a standard statistical measure. The Median is the same as the 50th Percentile where half of the samples are smaller than the median, and half are larger.

90% Line (Aggregate Report)

90th Percentile. 90% of the samples were smaller than or equal to this time.

99% Line (Aggregate Report)

99th Percentile. 99% of the samples were smaller than or equal to this time.
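The Median, 90% Line, and 99% Line can all be computed the same way from the recorded response times. A minimal sketch using the nearest-rank method (one common convention; reporting tools may interpolate slightly differently):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample value such that
    pct percent of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Response times in milliseconds for ten samples of one label.
response_times_ms = [120, 95, 310, 150, 88, 240, 175, 130, 99, 500]
print(percentile(response_times_ms, 50))  # Median: 130
print(percentile(response_times_ms, 90))  # 90% Line: 310
print(percentile(response_times_ms, 99))  # 99% Line: 500
```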

Min (Aggregate Report)

The minimum (shortest) time for that label. While the test is running, it displays a value based on the samples completed so far, and the final value once test execution has finished.

Max (Aggregate Report)

The longest time for that label. While the test is running, it displays a value based on the samples completed so far, and the final value once test execution has finished.

Error % (Aggregate Report)

The percent of requests with errors. While the test is running, it displays a value based on the samples completed so far, and the final value once test execution has finished.
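The calculation itself is simply failed samples divided by total samples. A minimal sketch, assuming one boolean success flag per sample as JMeter records:

```python
def error_percent(successes):
    """successes: list of booleans, one per sample (True = request succeeded).
    Returns the percentage of failed samples."""
    if not successes:
        return 0.0
    errors = sum(1 for ok in successes if not ok)
    return 100.0 * errors / len(successes)

# 2 failures out of 8 samples -> 25.0
print(error_percent([True, True, False, True, True, False, True, True]))
```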

Hits/s (Aggregate Report)

The Hits/s KPI reported depends on your script configuration. BlazeMeter counts unique requests that appear in the JTL file generated during the test. This means that if only high-level requests are present in the JTL file, the Hits/s figure relates only to the high-level requests. If, while configuring the test, you select to include sub-samples in your runs, then Hits/s represents all high-level requests and sub-samples (e.g. images, CSS, JS, etc.).

The way to identify how your test is configured is by the presence of the label “OTHERS”, which includes all of the samples that do not have a unique label (e.g. sub-samples).

If the test is run using a single console (no engines), sub-sample reporting is turned on automatically. If you use more than a single console, you need to turn this property on in order to count sub-samples as well as high-level requests.
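As a sketch of the underlying calculation, hits per second can be derived by bucketing sample timestamps (epoch milliseconds, as JMeter records them) into one-second windows and counting the samples in each:

```python
from collections import Counter

def hits_per_second(timestamps_ms):
    """Count samples per one-second bucket from epoch timestamps in ms."""
    buckets = Counter(ts // 1000 for ts in timestamps_ms)
    return dict(sorted(buckets.items()))

# Three samples land in second 1000, two in second 1001.
stamps = [1000000, 1000400, 1000900, 1001100, 1001800]
print(hits_per_second(stamps))  # {1000: 3, 1001: 2}
```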

KB/s (Aggregate Report)

Throughput is measured in bytes and represents the amount of data the virtual users received from the server. The throughput KPI is measured in kilobytes (KB) per second.
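A minimal sketch of the calculation, assuming the response size in bytes is known for each sample (the `bytes` column of a CSV JTL file):

```python
def kb_per_second(bytes_per_sample, duration_seconds):
    """Average throughput in kilobytes per second over the test run."""
    total_kb = sum(bytes_per_sample) / 1024
    return total_kb / duration_seconds

# 5 MB received over a 100-second run -> 51.2 KB/s
print(kb_per_second([1024 * 1024] * 5, 100))  # 51.2
```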
