What are feeders?

Generally speaking, Feeder is a type alias for Iterator[Map[String, T]], which means that the component created by the feed method polls Map[String, T] records and injects their content into the virtual user's Session.

Building a custom feeder is not a tough task. As the example below shows, one can create a random email generator in a single line:


import scala.util.Random
val feeder = Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))

The DSL provides a feed method.


feed(feeder)

It defines a workflow step where every virtual user feeds on the same Feeder. Every time a virtual user reaches this step, it pops a record out of the Feeder; the record is then injected into the user's Session, creating a new Session instance.
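
As a minimal usage sketch (the request name and endpoint are illustrative, and the HTTP module is assumed to be imported), the record's keys become session attributes that can be referenced with the Expression Language:

feed(feeder)
  .exec(http("signup").post("/signup").formParam("email", "${email}"))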

Note that if the Feeder can't produce enough records, Gatling will complain about it and the simulation will stop. It is also possible to feed multiple records at once, in which case the attribute names are suffixed with the record's index.

For example, if the columns are named foo and bar and we feed two records at once, we get the session attributes foo1, bar1, foo2 and bar2.

The corresponding code snippet is shown below; the second parameter is the number of records fed at a time (two, in this case):
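
feed(feeder, 2)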

RecordSeqFeederBuilder

An Array[Map[String, T]] or an IndexedSeq[Map[String, T]] can be implicitly converted into a Feeder. Moreover, this implicit conversion provides some additional methods for defining how the sequence is iterated:


.queue // default behavior: use an Iterator on the underlying sequence
.random // randomly pick an entry in the sequence
.shuffle // shuffle entries, then behave like queue
.circular // go back to the top of the sequence once the end is reached

Let us see an example of how to create such a Feeder:



val feeder = Array(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
).random

CSV Feeders

Gatling provides several builtins for reading character-separated values files. All files are to be placed in Gatling's data directory.

It is important to keep in mind that, by default, the parser complies with RFC 4180, so don't expect behaviors that do not honour this specification. The one exception is that header fields get trimmed of wrapping whitespace. Let us see how the CSV feeders are created:



val csvFeeder = csv("foo.csv") // use a comma separator
val tsvFeeder = tsv("foo.tsv") // use a tabulation separator
val ssvFeeder = ssv("foo.ssv") // use a semicolon separator
val customSeparatorFeeder = separatedValues("foo.txt", '#') // use your own separator

These builtins return instances of RecordSeqFeederBuilder. This means the whole file is loaded in memory and parsed up front, so the resulting feeders don't read from disk during the simulation run.
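
For illustration, assume a hypothetical foo.csv with the following content (the data is made up):

user,password
bob,hunter2
sue,letmein

csv("foo.csv") then uses the first line as attribute names and yields one record per remaining line, e.g. Map("user" -> "bob", "password" -> "hunter2").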

It should also be mentioned that you can specify an escape character, so that content characters are not confused with separators. Example:



val csvFeeder = csv("foo.csv", escapeChar = '\\')

JSON Feeders

You also have the flexibility of using data in JSON format instead of CSV, as shown below:


val jsonFileFeeder = jsonFile("foo.json")
val jsonUrlFeeder = jsonUrl("http://me.com/foo.json")

For this case, consider the following JSON content:


[
  {
    "id":19434,
    "foo":1
  },
  {
    "id":19435,
    "foo":2
  }
]

This JSON content will be turned into the following records:


record1: Map("id" -> 19434, "foo" -> 1)
record2: Map("id" -> 19435, "foo" -> 2)

Keep in mind that the root element has to be an array. Note also that, unlike CSV feeders, JSON values keep their type: in the records above, id and foo are numbers, not Strings.

JDBC Feeder

Gatling provides a builtin that reads from a JDBC connection.


// remember to import the JDBC module
import io.gatling.jdbc.Predef._

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Note that, just like the other builtins, this returns an instance of RecordSeqFeederBuilder. Keep the following in mind (a concrete sketch follows the list):

  • The databaseUrl must be a JDBC URL (e.g. jdbc:postgresql:gatling).
  • The credentials to access the database are the username and the password.
  • The records are retrieved with the SQL query.
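
As a concrete sketch (the URL, credentials and query below are placeholders, not a real setup):

import io.gatling.jdbc.Predef._

// load all user logins once, then iterate over them cyclically
val dbFeeder = jdbcFeeder("jdbc:postgresql:gatling", "gatling", "secret", "SELECT login FROM users").circular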

Sitemap Feeder

Gatling also supports a feeder that reads data from a Sitemap file.



// remember to import the http module
import io.gatling.http.Predef._

val feeder = sitemap("/path/to/sitemap/file")

Consider the Sitemap file shown below:



<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
    <changefreq>weekly</changefreq>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>

The above file will be turned into the following records:


record1: Map(
           "loc" -> "http://www.example.com/",
           "lastmod" -> "2005-01-01",
           "changefreq" -> "monthly",
           "priority" -> "0.8")

record2: Map(
           "loc" -> "http://www.example.com/catalog?item=12&desc=vacation_hawaii",
           "changefreq" -> "weekly")

record3: Map(
           "loc" -> "http://www.example.com/catalog?item=73&desc=vacation_new_zealand",
           "lastmod" -> "2004-12-23",
           "changefreq" -> "weekly")
		   

Redis feeder

Gatling can read data from Redis using one of the following Redis commands:

  • SPOP : Remove and return a random element from a set.
  • LPOP : Remove and return the first element of a list.
  • SRANDMEMBER : Return a random element from a set, without removing it.

By default, RedisFeeder uses the LPOP command, as follows:

import com.redis._
import io.gatling.redis.feeder.RedisFeeder

val redisPool = new RedisClientPool("localhost", 6379)

// use a list, so there's one single value per record, which is here named "foo"
val feeder = RedisFeeder(redisPool, "foo")

An optional third parameter can be used to specify the desired Redis command, as shown below:


// read data using SPOP command from a set named "foo"
val feeder = RedisFeeder(redisPool, "foo", RedisFeeder.SPOP)

It is interesting to note that, since v2.1.14, Redis supports mass insertion of data from a file. This makes it possible to load thousands of keys within a few seconds, and Gatling then reads them directly from memory.

As an example, the simple Scala function below generates a file with one thousand different URLs, ready to be loaded into a Redis list named URLS.


import java.io.{ File, PrintWriter }
import io.gatling.redis.util.RedisHelper._

def generateOneThousandUrls(): Unit = {
  val writer = new PrintWriter(new File("/tmp/loadtest.txt"))
  try {
    for (i <- 0 until 1000) {
      val url = "test?id=" + i
      // note the list name "URLS" here
      writer.write(generateRedisProtocol("LPUSH", "URLS", url))
    }
  } finally {
    writer.close()
  }
}

The URLs can then be loaded into Redis using the command below:


cat /tmp/loadtest.txt | redis-cli --pipe
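
Once loaded, the URLS list can be consumed like any other feeder. A minimal sketch, reusing the redisPool defined above (the HTTP request wiring is illustrative):

// each virtual user pops one URL, which lands in the "URLS" session attribute
val urlFeeder = RedisFeeder(redisPool, "URLS")

feed(urlFeeder)
  .exec(http("load url").get("${URLS}"))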

Converting

Sometimes we may need to convert the raw data we get from a feeder. For example, a CSV feeder gives us only Strings, but we may need to convert one of the attributes into an integer.

Feeder builders therefore expose a convert method:


convert(conversion: PartialFunction[(String, T), Any])

It takes a PartialFunction, which means we only need to define it for the attributes we want to convert; non-matching attributes are left unchanged. Its input is the attribute name and the attribute value. For example:


csv("myFile.csv").convert {
case ("attributeThatShouldBeAnInt", string) => string.toInt
}
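
After feeding, the converted attribute is already an Int in the Session. A hedged sketch of reading it back (the attribute name matches the example above):

feed(csv("myFile.csv").convert {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
})
  .exec { session =>
    // the attribute can now be read back as an Int rather than a String
    val value = session("attributeThatShouldBeAnInt").as[Int]
    session
  }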

Data Problem

There are two data-retrieval scenarios that may require special handling: non-shared data and user-dependent data.

Non-shared Data :

Sometimes we may want each virtual user to play all the records in a file. A plain Feeder cannot do this, as its records are shared across all virtual users.

To solve this problem, we can iterate over the file's records with foreach and flatten each record into the session with flattenMapIntoAttributes, as shown in the example below:


val records = csv("foo.csv").records

foreach(records, "record") {
  exec(flattenMapIntoAttributes("${record}"))
}

User Dependent Data :

We may need to filter the injected data based on some information in the Session. A Feeder cannot do this either: being just an iterator, it is unaware of the virtual user's context. That's when we have to write our own injection logic.

To understand this better, let us take an example. We have two files, and we want to inject data from the second one based on what has been injected from the first one. In userProject.csv:


user, project
bob, aProject
sue, bProject

And in projectIssue.csv:


project,issue
aProject,1
aProject,12
aProject,14
aProject,15
aProject,17
aProject,5
aProject,7
bProject,1
bProject,2
bProject,6
bProject,64

Here is how we can randomly inject an issue, based on the project:


import io.gatling.core.feeder._
import java.util.concurrent.ThreadLocalRandom

// index records by project
val recordsByProject: Map[String, IndexedSeq[Record[String]]] =
  csv("projectIssue.csv").records.groupBy { record => record("project") }

// convert the Map values to get only the issues instead of the full records
val issuesByProject: Map[String, IndexedSeq[String]] =
  recordsByProject.mapValues { records => records.map { record => record("issue") } }

// inject project
feed(csv("userProject.csv"))

  .exec { session =>
    // fetch the project from the session
    session("project").validate[String].map { project =>

      // fetch project's issues
      val issues = issuesByProject(project)

      // randomly select an issue
      val selectedIssue = issues(ThreadLocalRandom.current.nextInt(issues.length))

      // inject the issue in the session
      session.set("issue", selectedIssue)
    }
  }
					    
