What is an Assertion?

An assertion is an API element used to verify that global statistics, such as the number of failed requests, match the expectations for a complete simulation. Assertions are registered for a simulation using the assertions method in the setUp.

Following is an example of using assertions:


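A minimal sketch of registering assertions (the scenario scn and the injection profile are illustrative assumptions, not part of the original text):

```scala
// Minimal sketch (assumes io.gatling.core.Predef._ is in scope
// and `scn` is a scenario defined elsewhere in the simulation):
setUp(scn.inject(atOnceUsers(1))).assertions(
  global.responseTime.max.lt(100),    // max response time under 100 ms
  global.failedRequests.percent.is(0) // no failed requests at all
)
```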
Note that this method takes as many assertions as we like. The assertion API provides a dedicated DSL for chaining the following steps:

  • defining the scope of the assertion.
  • selecting the statistic.
  • selecting the metric.
  • defining the condition.

Defining the scope of the assertion

An assertion can test statistics calculated either from all requests or from only a part of them. The scope of an assertion can be defined in the following ways:

  • global: using statistics calculated from all the requests.
  • forAll: using statistics calculated for each individual request.
  • details(path): using statistics calculated for a particular request or group of requests.

Let us see an example of performing an assertion on the request Index in the group Search. This can be done by:

details("Search" / "Index")
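As a hedged sketch, the three scopes might be combined in a single setUp (the scenario scn, the thresholds, and the request names are illustrative assumptions):

```scala
// Illustrative scopes (assumes a scenario `scn` defined elsewhere):
setUp(scn.inject(atOnceUsers(10))).assertions(
  global.failedRequests.count.is(0),  // all requests together
  forAll.responseTime.max.lt(1000),   // each request individually
  details("Search" / "Index").successfulRequests.percent.gt(95) // one request in a group
)
```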

Selecting the statistic

The different statistics are:

  • responseTime: This will target the response time in milliseconds.
  • allRequests: This targets the number of requests.
  • failedRequests: This will target the number of failed requests.
  • successfulRequests: This will target the number of successful requests.
  • requestsPerSec: This will target the rate of requests per second.
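As a sketch of how these statistics are selected (the scenario scn and the thresholds are assumptions):

```scala
// Illustrative statistic selection (assumes a scenario `scn`):
setUp(scn.inject(atOnceUsers(10))).assertions(
  global.allRequests.count.gt(100),       // total number of requests
  global.failedRequests.percent.lte(1.0), // percentage of failed requests
  global.requestsPerSec.gte(10)           // throughput in requests per second
)
```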

Selecting the metric

The following metrics can be selected; they are relevant to the response time only:

  • min: This performs the assertions on the minimum of the metric.
  • max: This performs the assertions on the maximum of the metric.
  • mean: This performs the assertions on the mean of the metric.
  • percentile1: This performs assertions on the 1st percentile of the metric, as configured in gatling.conf. The default is the 50th percentile.
  • percentile2: This performs the assertion on the 2nd percentile of the metric, as configured in gatling.conf. The default is the 75th percentile. The same applies to percentile3 and percentile4, whose default percentiles are the 95th and 99th, respectively.
  • percentile(value: Double): This performs the assertions on the given percentile of the metric. Keep in mind that the parameter is a percentage value between 0 and 100.
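To illustrate what a percentile assertion compares against, here is a small standalone Scala sketch (not Gatling code) that computes a percentile over sample response times using the nearest-rank method; the object name and sample values are hypothetical:

```scala
// Standalone illustration (hypothetical, not part of the Gatling API):
// nearest-rank percentile over a set of response times in milliseconds.
object PercentileDemo {
  // Smallest value such that at least p% of observations are <= it.
  def percentile(times: Seq[Int], p: Double): Int = {
    require(p > 0 && p <= 100, "p must be in (0, 100]")
    val sorted = times.sorted
    val rank = math.ceil(p / 100.0 * sorted.length).toInt
    sorted(rank - 1)
  }

  def main(args: Array[String]): Unit = {
    val responseTimes = Seq(120, 85, 300, 95, 150, 210, 180, 99, 110, 130)
    println(percentile(responseTimes, 50)) // median response time
    println(percentile(responseTimes, 95)) // 95th percentile
  }
}
```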



Defining the condition

Conditions are used to check the value of the selected metric. Several conditions can be chained on the same metric.

The metric can be conditioned using the following functions:

  • lt(threshold): This will check whether the value of the metric is less than the threshold.
  • lte(threshold): This will check whether the value of the metric is less than or equal to the threshold.
  • gt(threshold): This will check whether the value of the metric is greater than the given threshold.
  • gte(threshold): This will check whether the value of the metric is greater than or equal to the threshold.
  • between(thresholdMin, thresholdMax): This checks whether the value is between the two thresholds.
  • between(thresholdMin, thresholdMax, inclusive = false): This checks whether the value is strictly between the two thresholds, excluding the boundary values.
  • is(value): This checks whether the value of the metric is equal to the given value.
  • in(sequence): This checks whether the value of the metric is in the given sequence.
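A hedged sketch chaining different conditions on various metrics (the scenario scn and the thresholds are assumptions):

```scala
// Illustrative conditions (assumes a scenario `scn`):
setUp(scn.inject(atOnceUsers(10))).assertions(
  global.responseTime.mean.lte(250),         // mean response time at most 250 ms
  global.responseTime.max.between(200, 500), // max within bounds (inclusive)
  global.failedRequests.count.in(0, 1, 2)    // failure count among allowed values
)
```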

Below are some examples of using assertions:

// Assert that the max response time of all requests is less than 100 ms
setUp(scn).assertions(global.responseTime.max.lt(100))

// Assert that every request has no more than 5% of failing requests
setUp(scn).assertions(forAll.failedRequests.percent.lte(5))

// Assert that the percentage of failed requests named "Index" in the group "Search"
// is exactly 0 %
setUp(scn).assertions(details("Search" / "Index").failedRequests.percent.is(0))

// Assert that the rate of requests per second for the group "Search"
// is between 100 and 1000
setUp(scn).assertions(details("Search").requestsPerSec.between(100, 1000))
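Putting the pieces together, a complete simulation with assertions could look like this sketch (the class name, base URL, scenario, and injection profile are illustrative assumptions):

```scala
// Sketch of a complete simulation with assertions
// (hypothetical names: BasicSimulation, the base URL, and the scenario):
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicSimulation extends Simulation {

  val httpProtocol = http.baseUrl("http://example.com") // assumed base URL

  val scn = scenario("Search")
    .exec(http("Index").get("/"))

  setUp(scn.inject(atOnceUsers(10)).protocols(httpProtocol))
    .assertions(
      global.responseTime.max.lt(100),     // max response time under 100 ms
      forAll.failedRequests.percent.lte(5) // at most 5% failures per request
    )
}
```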



If the simulation defines assertions, Gatling generates two reports in the js directory: a JSON file and a JUnit file.

Steps to generate the reports:

  • Editing the simulation file:
    Go to the user-files directory inside Gatling's folder and open the test Scala file in an editor.
    For example, use simple assertions inside the setUp function.


  • Writing the assertion function inside the Scala file:
    In this example, we will use three assertions: global.responseTime.max.lt(50), forAll.failedRequests.count.lt(5), and details("Search Request").successfulRequests.percent.gt(90).


  • Running the Simulation:
    After the assertions have been added to the Scala file, run the edited test script in the same way as a normal test script.


  • Reading the Gatling Assertion Report:
    Go to the js directory inside the results directory of Gatling. Inside the js directory, we have our JSON file. We may open it with an editor and see the result.
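The three assertions listed in the steps above could be registered together like this (a sketch; the scenario scn and the injection profile are assumptions):

```scala
// Sketch combining the three assertions from the steps above
// (assumes a scenario `scn` defined in the same simulation):
setUp(scn.inject(atOnceUsers(10))).assertions(
  global.responseTime.max.lt(50),
  forAll.failedRequests.count.lt(5),
  details("Search Request").successfulRequests.percent.gt(90)
)
```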


The JUnit file can be used with Jenkins' JUnit plugin. Below is an example of the JSON assertions report:

[
  {
    "path": "Global",
    "target": "max of response time",
    "condition": "is less than",
    "expectedValues": [50],
    "result": false,
    "message": "Global: max of response time is less than 50",
    "actualValue": [145]
  },
  {
    "path": "requestName",
    "target": "percent of successful requests",
    "condition": "is greater than",
    "expectedValues": [95],
    "result": true,
    "message": "requestName: percent of successful requests is greater than 95",
    "actualValue": [100]
  }
]