
Test Scripts

What you'll learn
  • The format of Artillery test scripts
  • Configuration options in Artillery test scripts

Overview

An Artillery test script is a YAML file composed of two main sections: config and scenarios.

The scenarios section contains definitions of VU behavior.

The config section sets runtime configuration for the test such as the URI of the system being tested, load phase configuration, plugins, and protocol-specific settings such as HTTP response timeouts.

config section

The config section usually defines the target (the hostname or IP address of the system under test), the load progression, and protocol-specific settings, such as HTTP response timeouts or Socket.io transport options. It may also be used to load and configure plugins and custom JS code.

target - target service

config.target sets the endpoint of the system under test, such as a hostname, an IP address or a URI.

The format of this field depends on the system you're testing and the environment it runs in. For example, for an HTTP-based application, it's typically the protocol + hostname (e.g. http://myapp.staging.local). For a WebSocket server, it's usually the hostname (and optionally the port) of the server (e.g. ws://127.0.0.1), and so on.

phases - load phases

A load phase defines how Artillery generates new virtual users (VUs) in a specified time period. For example, a typical performance test will have a gentle warm-up phase, followed by a ramp-up phase, and finish with maximum load sustained for a period of time.

config.phases is an array of phase definitions that Artillery goes through sequentially. Four kinds of phases are supported:

  1. A phase with a duration and a constant arrival rate of a number of new VUs per second
  2. A linear ramp-up phase where the number of new arrivals increases linearly over time
  3. A phase that generates a fixed count of new arrivals over a period of time
  4. A pause phase which generates no new VUs for a duration of time

You can cap the total number of VUs with the maxVusers option for any phase.

info

The duration of an arrival phase determines only how long virtual users will be generated for. It is not the same as the duration of a test run. How long a given test will run for depends on several factors, such as complexity and length of user scenarios, server response time, and network latency.

Load phase examples

Constant arrival rate

The following example generates 50 virtual users every second for 5 minutes:

config:
  target: "https://staging.example.com"
  phases:
    - duration: 300
      arrivalRate: 50

The following example generates 10 virtual users every second for 5 minutes, with no more than 50 concurrent virtual users at any given time:

config:
  target: "https://staging.example.com"
  phases:
    - duration: 300
      arrivalRate: 10
      maxVusers: 50

Ramp-up rate

The following example ramps up the arrival rate of virtual users from 10 to 50 over 2 minutes:

config:
  target: "https://staging.example.com"
  phases:
    - duration: 120
      arrivalRate: 10
      rampTo: 50

Fixed number of arrivals per second

The following example creates 20 virtual users in 60 seconds (one virtual user approximately every 3 seconds):

config:
  target: "https://staging.example.com"
  phases:
    - duration: 60
      arrivalCount: 20

A do-nothing pause phase

The following example does not send any virtual users for 60 seconds:

config:
  target: "https://staging.example.com"
  phases:
    - pause: 60
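Phases run one after another, so the warm-up / ramp-up / sustained-load progression described earlier can be sketched by combining the phase types above (the durations, rates, and phase names here are illustrative):

```yaml
config:
  target: "https://staging.example.com"
  phases:
    # Warm up gently: 5 new VUs per second for 1 minute
    - duration: 60
      arrivalRate: 5
      name: "Warm-up"
    # Ramp from 5 to 50 new VUs per second over 2 minutes
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: "Ramp-up"
    # Hold maximum load: 50 new VUs per second for 10 minutes
    - duration: 600
      arrivalRate: 50
      name: "Sustained load"
```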

How do ramps work?

Think of the rampTo setting as a shortcut for manually writing out a sequence of arrival phases. For example, let's say you have the following load phase defined:

phases:
  - duration: 100
    arrivalRate: 1
    rampTo: 50

The above load phase is equivalent to the following:

phases:
  - arrivalRate: 1
    duration: 2
  - arrivalRate: 2
    duration: 2
  - arrivalRate: 3
    duration: 2
  # ... etc ...
  - arrivalRate: 50
    duration: 2

Partial arrival rates, which can occur during a ramp, are rounded up (e.g. 1.5 arrivals becomes 2 arrivals).

environments - config profiles

Typically, you may want to reuse a load testing script across multiple environments with minor tweaks. For instance, you may want to run the same performance tests in development, staging, and production. However, for each environment, you need to set a different target and modify the load phases.

Instead of duplicating your test definition files for each environment, you can use the config.environments setting, which lets you define a number of named environments, each with its own environment-specific configuration.

A typical use-case is to define multiple targets with different load phase definitions for each of those systems:

config:
  target: "http://service1.acme.corp:3003"
  phases:
    - duration: 10
      arrivalRate: 1
  environments:
    production:
      target: "http://service1.prod.acme.corp:44321"
      phases:
        - duration: 1200
          arrivalRate: 10
    local:
      target: "http://127.0.0.1:3003"
      phases:
        - duration: 1200
          arrivalRate: 20

When running your performance test, you can specify the environment on the command line using the -e flag. For example, to execute the example test script defined above with the production configuration:

artillery run -e production my-script.yml

The $environment variable

When running your tests in a specific environment, you can access the name of the current environment using the $environment variable.

For example, you can print the name of the current environment from a scenario during test execution:

config:
  environments:
    local:
      target: "http://127.0.0.1:3003"
      phases:
        - duration: 120
          arrivalRate: 20
scenarios:
  - flow:
      - log: "Current environment is set to: {{ $environment }}"

If you run the test with artillery run -e local my-script.yml, Artillery will print "Current environment is set to: local".

plugins - plugin config

This section can be used to configure Artillery plugins. Please see plugins overview for details.

processor - custom JS code

config.processor may be set to a path to a CommonJS module which will be require()d and made available to scenarios.
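As a sketch of what such a module might look like (the function name and variable below are made up for illustration; custom functions referenced from a scenario receive a context object, an events emitter, and a done callback):

```javascript
// processor.js - a minimal sketch of a custom JS module for Artillery.
// "generateRandomId" is a hypothetical function name; a scenario would
// reference it as a step, and the module itself would be loaded via
// `processor: "./processor.js"` in the config section.
function generateRandomId(context, events, done) {
  // Variables set on context.vars become available as {{ randomId }}
  // templates in subsequent scenario steps.
  context.vars.randomId = Math.floor(Math.random() * 100000);
  return done();
}

module.exports = { generateRandomId };
```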

payload - loading data from CSV files

You can use a CSV file to provide dynamic data to test scripts. For example, you might have a list of usernames and passwords that you want to use to test authentication in your API. Artillery allows you to load, parse and map data in CSV files to variables which can be used inside virtual user scenarios.

tip

The main use case for loading data from CSV files is randomizing request payloads. If you require determinism, such as ensuring that each row is used at most once during a test run, or that rows are used in order, this feature may not work as expected.

Artillery supports two ways of providing data from a CSV file to virtual users:

  1. A row at a time, i.e. each VU gets data from just one row

  2. All rows, i.e. each VU has access to all of the data

For example, you may have a file named users.csv with the following contents:

testuser1,password1
testuser2,password2
testuser3,password3

To access this information in a test definition, you can load the data from the CSV file using the config.payload setting:

config:
  payload:
    # path is relative to the location of the test script
    path: "users.csv"
    fields:
      - "username"
      - "password"
scenarios:
  - flow:
      - post:
          url: "/auth"
          json:
            username: "{{ username }}"
            password: "{{ password }}"

In this example, we tell Artillery to load the users.csv file with the path setting, and to make the username and password variables available in scenarios, with values taken from one of the rows in the CSV file.

We can also make the entire dataset available to every VU, using loadAll, and loop through it in our scenario:

config:
  payload:
    path: "users.csv"
    fields:
      - "username"
      - "password"
    loadAll: true
    name: auth # refer to the data as "auth"
scenarios:
  - flow:
      - loop:
          - post:
              url: "/auth"
              json:
                username: "{{ $loopElement.username }}"
                password: "{{ $loopElement.password }}"
        over: auth

It's also possible to import multiple CSV files in a test definition by setting payload as an array:

payload:
  - path: "pets.csv"
    fields:
      - "species"
      - "name"
  - path: "urls.csv"
    fields:
      - "url"

You can also dynamically load different CSV files depending on the environment you set with the -e flag by using the $environment variable when specifying the path:

payload:
  - path: "{{ $environment }}-logins.csv"
    fields:
      - "username"
      - "password"

One use case for dynamically loading a payload file is loading a different set of usernames and passwords for an authentication endpoint when running the same test in different environments.

Payload file options

  • fields - Names of variables to use for each column in the CSV file
  • order (default: random) - Control how rows are selected from the CSV file for each new virtual user.
    • This option may be set to sequence to iterate through the rows in a sequence (looping around and starting from the beginning after reaching the last row). Note that this will not work as expected when running distributed tests, as each node will have its own copy of the CSV data.
  • skipHeader (default: false) - Set to true to make Artillery skip the first row in the file (typically the header row).
  • delimiter (default: ,) - If the payload file uses a delimiter other than a comma, set this option to the delimiter character.
  • cast (default: true) - By default, Artillery will convert fields to native types (e.g. numbers or booleans). To keep those fields as strings, set this option to false.
  • skipEmptyLines (default: true) - By default, Artillery skips empty lines in the payload. Set to false to include empty lines.
  • loadAll and name - set loadAll to true to provide all rows to each VU, and name to a variable name which will contain the data

Example

The following example loads a payload file called users.csv, skips the first row, and selects each subsequent row sequentially:

config:
  payload:
    path: "users.csv"
    fields:
      - "username"
      - "password"
    order: sequence
    skipHeader: true
scenarios:
  - # ... the rest of the script

variables - inline variables

Variables can be defined in the config.variables section and used in scenario definitions.

Variables work similarly to loading fields from a payload file. You can define multiple values for a variable and access them randomly in your scenarios. For instance, the following example defines two variables, {{ id }} and {{ postcode }}, with multiple values:

config:
  target: "http://app01.local.dev"
  phases:
    - duration: 300
      arrivalRate: 25
  variables:
    postcode:
      - "SE1"
      - "EC1"
      - "E8"
      - "WH9"
    id:
      - "8731"
      - "9965"
      - "2806"

tip

Variables defined in this block are only available in scenario definitions. They cannot be used to template any values in the config section of your scripts. If you need to dynamically override values in the config section, use environment variables in conjunction with $processEnvironment.
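For example, a sketch of overriding the target from the shell (TARGET_URL is an assumed environment variable name, not a built-in):

```yaml
config:
  # TARGET_URL is read from the shell environment at runtime,
  # e.g. export TARGET_URL="http://app01.local.dev"
  target: "{{ $processEnvironment.TARGET_URL }}"
  phases:
    - duration: 60
      arrivalRate: 5
```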

tls - self-signed certificates

This setting may be used to tell Artillery to accept self-signed TLS certificates:

config:
  tls:
    rejectUnauthorized: false

warning

Accepting self-signed certificates may be a security risk.

ensure - SLO checks

Artillery can validate if a metric's value meets a predefined threshold. If it doesn't, it will exit with a non-zero exit code. This is especially useful in CI/CD pipelines for automatic quality checks and as a way to check that SLOs are met.

info

The built-in ensure plugin needs to be enabled in config.plugins for this feature to work:

config:
  plugins:
    ensure: {}

Syntax

config:
  ensure:
    thresholds:
      - "metric.name.one": value1
      - "metric.name.two": value2
    conditions:
      - expression: "metric.name.one <= value1 and metric.name.two > value2"
        strict: true|false # defaults to true

Two types of checks may be set:

  • thresholds check that a metric's value is less than the defined value
  • conditions can be used to create advanced checks combining multiple metrics and conditions

Any of the metrics tracked during a test run may be used for setting checks. Both built-in and custom metrics may be used.

warning

Using a non-existent metric name will cause that check to fail

Threshold checks

A threshold check ensures that the aggregate value of a metric is under some threshold value.

config:
  ensure:
    thresholds:
      # p99 of response time must be <250:
      - "http.response_time.p99": 250
      # p95 of response time must be <100:
      - "http.response_time.p95": 100

Advanced conditional checks

More complex checks may be set with conditional expressions:

config:
  ensure:
    conditions:
      # Check that we generated 1000+ requests per second and that p95 is < 250ms
      - expression: "http.response_time.p95 < 250 and http.request_rate > 1000"

Setting strict: false on a condition will make that check optional. Failing optional checks do not cause Artillery to exit with a non-zero exit code. Checks are strict by default.
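For instance, a sketch of an optional (non-strict) check alongside a strict one (the threshold values are illustrative):

```yaml
config:
  ensure:
    conditions:
      # Strict (default): a failure makes Artillery exit with a non-zero code
      - expression: "http.response_time.p95 < 250"
      # Optional: a failure is reported but does not affect the exit code
      - expression: "http.response_time.p99 < 500"
        strict: false
```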

Expression syntax

Numeric arithmetic:
  • x + y - Add
  • x - y - Subtract
  • x * y - Multiply
  • x / y - Divide
  • x % y - Modulo
  • x ^ y - Power

Comparisons:
  • x == y - Equals
  • x < y - Less than
  • x <= y - Less than or equal to
  • x > y - Greater than
  • x >= y - Greater than or equal to

Boolean logic:
  • x or y - Boolean or
  • x and y - Boolean and
  • not x - Boolean not
  • x ? y : z - If boolean x, value y, else z
  • ( x ) - Explicit operator precedence

Built-in functions:
  • ceil(x) - Round floating point up
  • floor(x) - Round floating point down
  • random() - Random floating point from 0.0 to 1.0
  • round(x) - Round floating point
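Built-in functions and boolean operators can be combined in a single condition. A sketch (the metric names follow the earlier examples, and the thresholds are illustrative):

```yaml
config:
  ensure:
    conditions:
      # Require p95 under 300ms, but only once at least 1000 requests were made
      - expression: "http.requests < 1000 or ceil(http.response_time.p95) <= 300"
```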

Basic checks (v1 only)

info

This way of specifying checks is retained for backwards-compatibility with Artillery v1, and should no longer be used for new tests.

You can check that the aggregate response time latency is under a specific threshold. For example, to check that the aggregate p95 latency of a performance test is 200 milliseconds or less, add the following configuration to your script:

config:
  ensure:
    p95: 200

In this test definition, Artillery will exit with a non-zero exit code if the aggregate p95 is over 200 milliseconds.

You can validate the aggregate latency for min, max, median, p95, and p99.

You can also verify that the error rate of your performance test doesn't exceed a defined percentage. The error rate is the ratio of virtual users that didn't complete their scenarios successfully to the total number of virtual users created during the test. For instance, if your performance test generates 1000 virtual users and 50 didn't complete their scenarios successfully, the error rate for the performance test is 5%.

The following example will make Artillery exit with a non-zero exit code if the total error rate exceeds 1%:

config:
  ensure:
    maxErrorRate: 1

defaults (deprecated in v2)

This section sets default headers that will apply to all HTTP requests. This has been deprecated in Artillery v2.

timeout (deprecated in v2)

Set the number of seconds to wait for the server to start responding (send response headers and start the response body).

This setting has been deprecated. Use config.http.timeout instead.
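A sketch of the equivalent setting in its current location (the timeout value is illustrative):

```yaml
config:
  http:
    # Wait up to 30 seconds for the server to start responding
    timeout: 30
```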

Using environment variables

Values can be set dynamically via environment variables, which are available under the $processEnvironment template variable. This functionality helps you set different configuration values without modifying the test definition, and keeps secrets out of your source code.

For example, to set a default HTTP header for all requests via the SERVICE_API_KEY environment variable, your test definition would look like this:

config:
  target: https://service.acme.corp
  phases:
    - duration: 600
      arrivalRate: 10
scenarios:
  - flow:
      - get:
          url: "/"
          headers:
            x-api-key: "{{ $processEnvironment.SERVICE_API_KEY }}"

You can keep the API key out of the source code and provide it on the fly when executing the test script:

export SERVICE_API_KEY="012345-my-api-key"
artillery run my-test.yaml

scenarios section

The scenarios section contains definitions for one or more scenarios for the virtual users (VUs) that Artillery will create. Each scenario is a series of steps representing a typical sequence of requests or messages sent by a user of an application.

A scenario definition is an object which requires a flow attribute and may contain additional optional attributes:

  • flow (required) - An array of operations that a virtual user performs. For example, you can execute GET and POST requests for an HTTP-based application or emit events for a Socket.IO test.
  • name (optional) - Assign a descriptive name to a scenario, which can be helpful in reporting.
  • weight (optional) - Allows for the probability of a scenario being picked by a new virtual user to be "weighed" relative to other scenarios.

Each Artillery engine used during testing supports additional scenario attributes; read each engine's documentation to learn what you can do in its scenarios.

before and after sections

before and after are optional top-level sections that can be used to run an arbitrary scenario once per test definition, before or after the scenarios section has run. Any variable captured during the before execution will be available to all virtual users and to the after scenario. These sections are useful for setting up or tearing down test data.

info

When running in distributed mode, before and after hooks will be executed once per worker.

info

The after hook is only available in Artillery v2.

The following example calls an authentication endpoint and captures an auth token before the virtual users arrive. After the scenarios have run, the after section invalidates the token:

config:
  target: "http://app01.local.dev"
  phases:
    - duration: 300
      arrivalRate: 25

before:
  flow:
    - log: "Get auth token"
    - post:
        url: "/auth"
        json:
          username: "myUsername"
          password: "myPassword"
        capture:
          - json: $.id_token
            as: token

scenarios:
  - flow:
      - get:
          url: "/data"
          headers:
            authorization: "Bearer {{ token }}"

after:
  flow:
    - log: "Invalidate token"
    - post:
        url: "/logout"
        json:
          token: "{{ token }}"

Scenario weights

Weights allow you to specify that some scenarios should be picked more often than others. If you have three scenarios with weights 1, 2, and 5, the scenario with a weight of 2 is twice as likely to be picked as the one with a weight of 1, and the one with a weight of 5 is 2.5 times as likely to be picked as the one with a weight of 2. In terms of probabilities:

  • scenario 1: 1/8 = 12.5% probability of being picked
  • scenario 2: 2/8 = 25% probability of being picked
  • scenario 3: 5/8 = 62.5% probability of being picked

Scenario weights are optional and set to 1 by default, meaning each scenario has the same probability of getting picked.
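The three-scenario split above can be sketched as follows (the scenario names and URLs are illustrative):

```yaml
scenarios:
  - name: "Scenario 1" # picked ~12.5% of the time
    weight: 1
    flow:
      - get:
          url: "/one"
  - name: "Scenario 2" # picked ~25% of the time
    weight: 2
    flow:
      - get:
          url: "/two"
  - name: "Scenario 3" # picked ~62.5% of the time
    weight: 5
    flow:
      - get:
          url: "/three"
```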