A common question is why the average response time for a test was higher than expected or desired. This question typically comes up in the following contexts:
- A BlazeMeter test has a higher average response time than the same test run locally.
- Two different BlazeMeter tests show different average response times.
- Two runs of the same BlazeMeter test show different average response times.
- Tests run from different locations/engine providers show different average response times.
The last section of this article will touch on each of the above scenarios, but first, let's explore an overall explanation of where BlazeMeter fits into the picture.
BlazeMeter is Merely the Messenger
First, please be aware that BlazeMeter only reports these metrics as observed by the engine from the location/provider you selected. BlazeMeter does not in any way impact or interfere with these metrics; it only reports them.
In other words: BlazeMeter has no control over response time whatsoever; it simply measures and reports what it observes.
Waiting for a Call Back
Let's use a telephone metaphor to illustrate why. Say you've made a phone call and left a message asking the other person to return your call.
You then wait patiently by the phone for the other person to call you back.
When the phone finally rings, you know only how long you waited for it to ring. You have no way of knowing why it took the person on the other end that long to return your call.
It is much the same for BlazeMeter. When you run a test from BlazeMeter, the system can only know how long it took your application server to respond; it cannot know why it took as long as it did.
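To make this concrete, here is a minimal sketch of the measurement itself, using only Python's standard library. The URL is a placeholder; any load engine, BlazeMeter's included, is ultimately doing some version of this: start a timer, send the request, and stop the timer when the response arrives.

```python
# Minimal sketch of what any load engine can observe: the elapsed time
# between sending a request and receiving the full response.
import time
import urllib.request

URL = "https://example.com/"  # placeholder endpoint

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as response:
    response.read()  # wait for the entire response body
elapsed_ms = (time.perf_counter() - start) * 1000

# The client knows *how long* it waited -- nothing about *why*.
print(f"Response time: {elapsed_ms:.1f} ms")
```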
Some Tips
To find out why your application server took as long as it did to respond to the test engine(s), you'll need to investigate your server and network internally with the teams that can help you troubleshoot. We can offer some pointers to help you get started, though.
(1) A BlazeMeter test has a higher average response time than the same test run locally:
Keep in mind that BlazeMeter's engines are in different geographical locations and on different networks than your local machine, so it's unlikely, if not impossible, for response times to be comparable between the two.
This is all the more true if your local machine is on the same network as your application server, since data has far less distance to travel in that scenario. Either way, the route from each engine to your server will always differ from the route from your local machine to your server.
If you would like to keep testing locally, inside your own network, consider setting up an On-Premise Location (OPL) -- our term for a BlazeMeter engine that you install within your own network.
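If you want a rough sense of how much of a measured response time is pure network travel rather than server work, a quick sketch like the one below can help. It times the TCP connection setup (dominated by one network round trip) separately from a full HTTP request; the host and URL are placeholders for your own endpoint. Run it from your local machine, and imagine the connect time growing with every extra hop a remote engine has to cross.

```python
# Rough sketch: compare network round-trip time against the full HTTP
# request time for the same host. The gap between the two is a crude
# indicator of server work vs. network distance.
import socket
import time
import urllib.request

HOST, PORT = "example.com", 443  # placeholder host
URL = f"https://{HOST}/"

# Time the connection setup (DNS lookup + TCP handshake) -- dominated
# by one network round trip.
start = time.perf_counter()
socket.create_connection((HOST, PORT), timeout=10).close()
connect_ms = (time.perf_counter() - start) * 1000

# Time a full request (DNS + TCP + TLS + server processing + transfer).
start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as r:
    r.read()
total_ms = (time.perf_counter() - start) * 1000

print(f"Connection setup (~1 round trip): {connect_ms:.1f} ms")
print(f"Full request:                     {total_ms:.1f} ms")
```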
(2) Two different BlazeMeter tests show different average response times:
There are many possibilities as to why response times may differ between two different tests. For example, a more complex script may put a heavier strain on your application server or your network, resulting in bottlenecks that can impact response time. Keep in mind that no two test scripts are alike, so some differences are inevitable.
You can also expect a multi-test to experience higher response times than a single test, if only because a multi-test is by its very definition more resource-intensive on your server than most single tests.
(3) Two runs of the same BlazeMeter test show different average response times:
It's not uncommon for two runs of the same test to have considerably different average response times. If this happens, we recommend working with your application server and network teams to investigate what conditions may have differed between the time frames of the two runs. It's possible an internal server or network issue caused a momentary delay in getting responses out to BlazeMeter's engines.
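If you'd like to quantify the difference before escalating, the sketch below compares basic statistics from two runs. It assumes JMeter-style CSV results (such as the kpi.jtl files typically included in downloadable test artifacts), where the elapsed column holds each sample's response time in milliseconds; the file names are placeholders.

```python
# Hedged sketch for quantifying run-to-run variance, assuming
# JMeter-style CSV results with a header row and an "elapsed" column
# holding response times in milliseconds.
import csv
import statistics

def load_elapsed(path):
    """Read the elapsed (ms) column from a JMeter-style CSV results file."""
    with open(path, newline="") as f:
        return [int(row["elapsed"]) for row in csv.DictReader(f)]

def summarize(name, samples):
    samples = sorted(samples)
    p90 = samples[int(len(samples) * 0.9)]  # nearest-rank approximation
    print(f"{name}: n={len(samples)}  avg={statistics.mean(samples):.0f} ms  "
          f"median={statistics.median(samples):.0f} ms  p90={p90} ms")

# Placeholder file names -- substitute your two runs' result files.
summarize("Run 1", load_elapsed("run1.jtl"))
summarize("Run 2", load_elapsed("run2.jtl"))
```

Comparing medians and percentiles alongside the average helps reveal whether a handful of slow outliers skewed one run.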
(4) Tests run from different locations/engine providers show different average response times:
Some variance in response times is to be expected, because (a) data sent to engines in two different geographic locations travels two entirely different routes, and (b) different providers (Google, AWS, Azure) simply provide different machines, and though they are comparable, they are not exactly the same.
In the case of the former, if the difference in response times is severe, you may need to work with your network team internally to see what may be causing bottlenecks when sending data to some locations versus others.
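As a back-of-the-envelope illustration of point (a), physics alone puts a different floor on response time from each region. Light in fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), and real routes are longer than straight-line distance, so the figures below are optimistic lower bounds. The distances are rough illustrations, not exact measurements.

```python
# Lower bound on round-trip time imposed by distance alone, assuming
# signal propagation of roughly 200,000 km/s in fiber. Real routes add
# hops, queuing, and detours on top of this.
FIBER_KM_PER_SEC = 200_000

def min_rtt_ms(distance_km):
    """Minimum round-trip time for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

for place, km in [("Same city", 50),
                  ("US East to US West", 4_000),
                  ("US East to Europe", 6_500),
                  ("US East to Australia", 16_000)]:
    print(f"{place:>22}: >= {min_rtt_ms(km):.0f} ms round trip")
```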