What are feeders in Gatling?

Generally speaking, Feeder is a type alias for Iterator[Map[String, T]], which means that any component created by the feed method polls Map[String, T] records from it and injects their content.

Building a custom feeder is not a tough task. As the example below shows, one can create a random email generator:

import scala.util.Random
val feeder = Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))
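Since this feeder is plain Scala, it can be exercised outside Gatling. A minimal check (the 28-character length assumes the 20-character local part plus the 8 characters of "@foo.com"):

```scala
import scala.util.Random

// Same feeder as above: an infinite Iterator of single-entry records
val feeder: Iterator[Map[String, String]] =
  Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))

// Each call to next() produces a fresh record
val email = feeder.next()("email")
assert(email.length == 28 && email.endsWith("@foo.com"))
```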

The DSL provides a feed method.


It defines a workflow step where every virtual user feeds on the same Feeder. Each time a virtual user reaches this step, it pops a record out of the Feeder, which is then injected into the user's session, resulting in a new Session instance.

Note that if the Feeder can't produce enough records, Gatling will complain about it and the simulation will stop. It is also possible to feed multiple records at once; in that case, attribute names will be suffixed with the index of the record they come from.

For example, if the column names are foo and bar and we feed two records at once, we get the session attributes "foo1", "bar1", "foo2" and "bar2".

The corresponding code snippet is feed(feeder, 2), where the parameter 2 is the number of records fed at a time.
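The suffixing behavior can be sketched in plain Scala. Note that feedN below is a hypothetical helper written for illustration, not a Gatling API: it merges n records, suffixing each attribute name with its 1-based record index.

```scala
// Hypothetical helper mimicking how feed(feeder, n) suffixes attribute names
def feedN(it: Iterator[Map[String, String]], n: Int): Map[String, String] =
  (1 to n).flatMap { i =>
    it.next().map { case (key, value) => (key + i) -> value }
  }.toMap

val records = Iterator(
  Map("foo" -> "fooA", "bar" -> "barA"),
  Map("foo" -> "fooB", "bar" -> "barB")
)

val attributes = feedN(records, 2)
// attributes == Map("foo1" -> "fooA", "bar1" -> "barA", "foo2" -> "fooB", "bar2" -> "barB")
```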


An Array[Map[String, T]] or an IndexedSeq[Map[String, T]] can be implicitly converted into a Feeder. The conversion also provides additional methods for defining how the sequence is iterated:

.queue // default behavior: use an Iterator on the underlying sequence
.random // randomly pick an entry in the sequence
.shuffle // shuffle entries, then behave like queue
.circular // go back to the top of the sequence once the end is reached

Let us see an example of how such a Feeder is created:

val feeder = Array(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
).random
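To see what a strategy like .circular does, here is a plain-Scala sketch of its wrap-around behavior (an illustration of the semantics, not Gatling's actual implementation):

```scala
// .circular semantics: restart from the top of the sequence once the end is reached
val seq = IndexedSeq(
  Map("foo" -> "foo1"),
  Map("foo" -> "foo2"),
  Map("foo" -> "foo3")
)

val circular: Iterator[Map[String, String]] = Iterator.continually(seq).flatten

// The 4th record wraps back to the first entry
val firstFour = circular.take(4).map(_("foo")).toList
// firstFour == List("foo1", "foo2", "foo3", "foo1")
```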

CSV Feeders

Gatling provides several builtins for reading character-separated values files. These files must be placed in the data directory of your Gatling bundle.

It is important to keep in mind that, by default, the parser complies with RFC 4180, so do not expect behaviors that this specification does not support. One difference, however, is that header fields get trimmed of wrapping whitespace. Let us see how such feeders are created:

val csvFeeder = csv("foo.csv") // use a comma separator
val tsvFeeder = tsv("foo.tsv") // use a tabulation separator
val ssvFeeder = ssv("foo.ssv") // use a semicolon separator
val customSeparatorFeeder = separatedValues("foo.txt", '#') // use your own separator

These builtins return instances of RecordSeqFeederBuilder, meaning the complete file is loaded into memory and parsed, so the resulting feeders don't read from disk during the simulation run.

It should also be mentioned that a user can specify an escape character, so that a separator character appearing in the content is not mistaken for a delimiter. Example:

val csvFeeder = csv("foo.csv", escapeChar = '\\')

JSON Feeders

There is also the flexibility of using data in JSON format instead of CSV, as shown below:

val jsonFileFeeder = jsonFile("foo.json")
val jsonUrlFeeder = jsonUrl("http://me.com/foo.json")

For instance, consider the following JSON content:

[
  {
    "id": 19434,
    "foo": 1
  },
  {
    "id": 19435,
    "foo": 2
  }
]

This JSON will be turned into the following records:

record1: Map("id" -> 19434, "foo" -> 1)
record2: Map("id" -> 19435, "foo" -> 2)

Keep in mind that the root element has to be an array.

JDBC Feeder

Gatling provides a builtin that reads from a JDBC connection.

// keep in mind to import the JDBC Module
import io.gatling.jdbc.Predef._

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Just like the other built-in parsers, this one returns an instance of RecordSeqFeederBuilder. The following things have to be kept in mind:

  • The databaseUrl must be a JDBC URL (e.g. jdbc:postgresql:gatling).
  • The username and password are the credentials used to access the database.
  • The SQL query retrieves the values that will be fed.

Sitemap Feeder

Gatling also supports a feeder that reads data from a SiteMap file.

// remember to import the http module
import io.gatling.http.Predef._

val feeder = sitemap("/path/to/sitemap/file")

Consider the following sitemap file:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
This file will be transformed into the following records:

record1: Map(
           "loc" -> "http://www.example.com/",
           "lastmod" -> "2005-01-01",
           "changefreq" -> "monthly",
           "priority" -> "0.8")

record2: Map(
           "loc" -> "http://www.example.com/catalog?item=12&desc=vacation_hawaii",
           "changefreq" -> "weekly")

record3: Map(
           "loc" -> "http://www.example.com/catalog?item=73&desc=vacation_new_zealand",
           "lastmod" -> "2004-12-23",
           "changefreq" -> "weekly")

Redis Feeder

Gatling can read data from Redis using one of the following Redis commands:

  • LPOP: removes and returns the first element of a list.
  • SPOP: removes and returns a random element from a set.
  • SRANDMEMBER: returns a random element from a set without removing it.

By default, RedisFeeder uses the LPOP command:

import com.redis._
import io.gatling.redis.feeder.RedisFeeder

val redisPool = new RedisClientPool("localhost", 6379)

// use a list, so there's one single value per record, which is here named "foo"
val feeder = RedisFeeder(redisPool, "foo")

A third parameter may be used to specify the desired Redis Command. As shown below:

// read data using SPOP command from a set named "foo"
val feeder = RedisFeeder(redisPool, "foo", RedisFeeder.SPOP)

It is interesting to note that, since v2.1.14, Redis supports mass insertion of data from a file. This makes it possible to load millions of keys within a few seconds, and Gatling will read them directly from memory.

Let us take an example where a simple Scala function generates a file with one million different URLs, ready to be loaded into a Redis list named URLS:

import java.io.{ File, PrintWriter }
import io.gatling.redis.util.RedisHelper._

def generateOneMillionUrls(): Unit = {
  val writer = new PrintWriter(new File("/tmp/loadtest.txt"))
  try {
    for (i <- 0 to 1000000) {
      val url = "test?id=" + i
      // note the list name "URLS" here
      writer.write(generateRedisProtocol("LPUSH", "URLS", url))
    }
  } finally {
    writer.close()
  }
}
The URLs can then be loaded into Redis using the below command:

cat /tmp/loadtest.txt | redis-cli --pipe
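The file's contents follow the Redis wire protocol (RESP), which is what generateRedisProtocol emits. The encoder below is a hypothetical stand-in written for illustration, assuming ASCII arguments (so character length equals byte length):

```scala
// Hypothetical RESP encoder: "*<arg count>\r\n", then "$<length>\r\n<arg>\r\n" per argument
def redisProtocol(args: String*): String =
  args.map(a => s"$$${a.length}\r\n$a\r\n").mkString(s"*${args.length}\r\n", "", "")

val line = redisProtocol("LPUSH", "URLS", "test?id=1")
// line == "*3\r\n$5\r\nLPUSH\r\n$4\r\nURLS\r\n$9\r\ntest?id=1\r\n"
```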


Converting

Sometimes we need to convert the raw data we get from a feeder. For example, a CSV feeder gives us Strings only, but we may need to convert one of the attributes into an integer.

For this purpose, feeders expose a convert method:

convert(conversion: PartialFunction[(String, T), Any])

convert takes a PartialFunction, which means we only define it for the attributes we want to convert; non-matching attributes, whose values remain Strings, are left unchanged. For example:

csv("myFile.csv").convert {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
}
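The per-attribute behavior can be sketched in plain Scala: attributes matched by the PartialFunction are converted, the rest pass through unchanged (convertRecord is an illustrative helper, not a Gatling API):

```scala
// The conversion, defined only for the attribute we want to change
val conversion: PartialFunction[(String, String), Any] = {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
}

// Hypothetical helper applying the conversion to a single record
def convertRecord(record: Map[String, String]): Map[String, Any] =
  record.map { case kv @ (key, value) =>
    key -> conversion.applyOrElse(kv, (_: (String, String)) => value)
  }

val converted = convertRecord(Map("attributeThatShouldBeAnInt" -> "42", "name" -> "bob"))
// converted == Map("attributeThatShouldBeAnInt" -> 42, "name" -> "bob")
```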

Data Problem

There are two cases of data retrieval that may require special handling: non-shared data and user-dependent data.

Non-shared Data:

Sometimes we may want all virtual users to play all the records in a file, but a Feeder doesn't provide this behavior.

To solve this, we can use the underlying records directly and loop over them, flattening each record into the session with the flattenMapIntoAttributes expression, as in the example below:

val records = csv("foo.csv").records

foreach(records, "record") {
  exec(flattenMapIntoAttributes("${record}"))
}

User-Dependent Data:

We may need to filter the injected data based on some information from the Session. A Feeder cannot achieve this: being just an Iterator, it is unaware of the context. This is where we have to write our own injection logic.

To understand better, let us take an example. We have two files, and we want to inject data from the second one based on what has been injected from the first one. In userProject.csv:

user, project
bob, aProject
sue, bProject

And in projectIssue.csv (illustrative content; any rows with these two columns would do):

project, issue
aProject, 1
aProject, 12
bProject, 3
bProject, 42
This is how we can randomly inject an issue, based on the Project:

import io.gatling.core.feeder._
import java.util.concurrent.ThreadLocalRandom

// index records by project
val recordsByProject: Map[String, IndexedSeq[Record[String]]] =
  csv("projectIssue.csv").records.groupBy { record => record("project") }

// convert the Map values to get only the issues instead of the full records
val issuesByProject: Map[String, IndexedSeq[String]] =
  recordsByProject.mapValues { records => records.map { record => record("issue") } }

// inject project
feed(csv("userProject.csv"))
  .exec { session =>
    // fetch project from the session
    session("project").validate[String].map { project =>

      // fetch project's issues
      val issues = issuesByProject(project)

      // randomly select an issue
      val selectedIssue = issues(ThreadLocalRandom.current.nextInt(issues.length))

      // inject the issue in the session
      session.set("issue", selectedIssue)
    }
  }