Crate goose


Have you ever been attacked by a goose?

Goose is a load testing tool inspired by Locust. User behavior is defined with standard Rust code.

Goose load tests, called Goose Attacks, are built by creating an application with Cargo, and declaring a dependency on the Goose library.

Goose uses reqwest to provide a convenient HTTP client.

Creating and running a Goose load test

Creating a simple Goose load test

First create a new empty cargo application, for example:

$ cargo new loadtest
     Created binary (application) `loadtest` package
$ cd loadtest/

Add Goose as a dependency in Cargo.toml:

goose = "0.9"

Add the following boilerplate use declaration at the top of your src/main.rs:

use goose::prelude::*;

Using the above prelude will automatically add the following use statements necessary for your load test, so you don't need to manually add them:

use goose::{GooseAttack, task, taskset};
use goose::goose::{GooseTaskSet, GooseUser, GooseTask};

Below your main function (which currently is the default Hello, world!), add one or more load test functions. The names of these functions are arbitrary, but it is recommended you use self-documenting names. Load test functions must be async. Each load test function must accept a reference to a GooseUser. For example:

use goose::prelude::*;

async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/foo").await?;

    Ok(())
}

In the above example, we're using the GooseUser helper method get to load a path on the website we are load testing. This helper creates a reqwest request builder and uses it to build and execute a request for the above path. If you want access to the request builder object, you can instead use the goose_get helper, for example to set a timeout on this specific request:

use std::time;

use goose::prelude::*;

async fn loadtest_bar(user: &GooseUser) -> GooseTaskResult {
    let request_builder = user.goose_get("/path/to/bar").await?;
    let _goose = user.goose_send(request_builder.timeout(time::Duration::from_secs(3)), None).await?;

    Ok(())
}


We pass the request_builder object to goose_send which builds and executes it, also collecting useful statistics. The .await at the end is necessary as goose_send is an async function.
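Tasks can also inspect what came back. The following is a sketch only: it assumes the object returned by get exposes the underlying reqwest result in a response field, which may differ between Goose versions.

```rust
use goose::prelude::*;

// Sketch: inspect the response status after the request completes.
// The `response` field holding the underlying reqwest result is an
// assumption about this Goose version's API.
async fn loadtest_foo_checked(user: &GooseUser) -> GooseTaskResult {
    let goose = user.get("/path/to/foo").await?;

    if let Ok(response) = &goose.response {
        if !response.status().is_success() {
            // Log unexpected statuses; a real load test might instead
            // record this as a failure in Goose's statistics.
            eprintln!("unexpected status: {}", response.status());
        }
    }

    Ok(())
}
```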

Once all our tasks are created, we edit the main function to initialize goose and register the tasks. In this very simple example we only have two tasks to register, while in a real load test you can have any number of task sets with any number of individual tasks.

use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_stats = GooseAttack::initialize()?
        .register_taskset(taskset!("LoadtestTasks")
            .set_wait_time(0, 3)?
            // Register the foo task, assigning it a weight of 10.
            .register_task(task!(loadtest_foo).set_weight(10)?)
            // Register the bar task, assigning it a weight of 2 (so it
            // runs 1/5 as often as foo). Apply a task name which shows up
            // in statistics.
            .register_task(task!(loadtest_bar).set_name("bar").set_weight(2)?)
            // You could also set a default host here, for example:
            //.set_host("http://dev.local")
        )
        .execute()?;

    Ok(())
}


async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/foo").await?;

    Ok(())
}


async fn loadtest_bar(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/bar").await?;

    Ok(())
}


Goose now spins up a configurable number of users, each simulating a user on your website. Thanks to reqwest, each user maintains its own web client state, handling cookies and more, so your "users" can log in, fill out forms, and otherwise behave as real users on your site would.
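A login flow, for example, might POST form data. The following is an illustrative sketch only: the /login path, the field names, and the exact shape of the post helper's body argument are assumptions, not part of the example site above.

```rust
use goose::prelude::*;

// Hypothetical login task: POST a url-encoded form body.
// The path and field names are illustrative only.
async fn loadtest_login(user: &GooseUser) -> GooseTaskResult {
    let _goose = user
        .post("/login", "username=test_user&password=test_pass")
        .await?;

    Ok(())
}
```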

Running the Goose load test

Attempts to run our example will result in an error, as we have not yet defined the host against which this load test should be run. We intentionally do not hard code the host in the individual tasks, as this allows us to run the test against different environments, such as local and staging.

$ cargo run --release
   Compiling loadtest v0.1.0 (~/loadtest)
    Finished release [optimized] target(s) in 1.52s
     Running `target/release/loadtest`
05:33:06 [ERROR] Host must be defined globally or per-TaskSet. No host defined for LoadtestTasks.

Pass in the -h flag to see all available run-time options. For now, we'll use a few options to customize our load test.

$ cargo run --release -- --host http://dev.local -t 30s -v

The first option we specified is --host, which in this case tells Goose to run the load test against an 8-core VM on my local network. The -t 30s option tells Goose to end the load test after 30 seconds (for real load tests you'll certainly want to run longer; you can use m to specify minutes and h to specify hours, so for example -t 1h30m would run the load test for 1 hour 30 minutes). Finally, the -v flag tells Goose to display INFO and higher level logs to stdout, giving more insight into what is happening. (Additional -v flags result in considerably more debug output, and are not recommended for running actual load tests; they're only useful if you're trying to debug Goose itself.)

Running the test results in the following output (broken up to explain it as it goes):

   Finished release [optimized] target(s) in 0.05s
    Running `target/release/loadtest --host 'http://dev.local' -t 30s -v`
05:56:30 [ INFO] Output verbosity level: INFO
05:56:30 [ INFO] Logfile verbosity level: INFO
05:56:30 [ INFO] Writing to log file: goose.log

By default Goose will write a log file with INFO and higher level logs into the same directory as you run the test from.

05:56:30 [ INFO] run_time = 30
05:56:30 [ INFO] concurrent users defaulted to 8 (number of CPUs)

Goose will default to launching 1 user per available CPU core, and will launch them all in one second. You can change how many users are launched with the -u option, and you can change how many users are launched per second with the -r option. For example, -u 30 -r 2 would launch 30 users over 15 seconds, or two users per second.
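Putting those flags together (using the same example binary and the hypothetical http://dev.local host from above), a longer run with an explicit ramp-up might look like:

```shell
# Launch 30 users at 2 per second (a 15-second ramp-up) and run for 5 minutes.
$ cargo run --release -- --host http://dev.local -u 30 -r 2 -t 5m
```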

05:56:30 [ INFO] global host configured: http://dev.local
05:56:30 [ INFO] launching user 1 from LoadtestTasks...
05:56:30 [ INFO] launching user 2 from LoadtestTasks...
05:56:30 [ INFO] launching user 3 from LoadtestTasks...
05:56:30 [ INFO] launching user 4 from LoadtestTasks...
05:56:30 [ INFO] launching user 5 from LoadtestTasks...
05:56:30 [ INFO] launching user 6 from LoadtestTasks...
05:56:30 [ INFO] launching user 7 from LoadtestTasks...
05:56:31 [ INFO] launching user 8 from LoadtestTasks...
05:56:31 [ INFO] launched 8 users...

Each user is launched in its own thread with its own user state. Goose is able to make very efficient use of server resources.

05:56:46 [ INFO] printing running statistics after 15 seconds...
 Name                    | # reqs         | # fails        | req/s  | fail/s
 GET /path/to/foo        | 15,795         | 0 (0%)         | 1,053  | 0    
 GET bar                 | 3,161          | 0 (0%)         | 210    | 0    
 Aggregated              | 18,956         | 0 (0%)         | 1,263  | 0    

When printing statistics, by default Goose will display running values approximately every 15 seconds. Running statistics are broken into two tables. The first, above, shows how many requests have been made, how many of them failed (non-2xx response), and the corresponding per-second rates.

Note that Goose respected the per-task weights we set, and foo (with a weight of 10) is being loaded five times as often as bar (with a weight of 2). Also notice that because we didn't name the foo task, we see its URL in the statistics by default, whereas we did name the bar task so we see its name in the statistics.

 Name                    | Avg (ms)   | Min        | Max        | Median    
 GET /path/to/foo        | 67         | 31         | 1351       | 53      
 GET bar                 | 60         | 33         | 1342       | 53      
 Aggregated              | 66         | 31         | 1351       | 56      

The second table in running statistics provides details on response times. In our example (which is running over wifi from my development laptop), on average each page is returning within 66 milliseconds. The quickest page response was for foo in 31 milliseconds. The slowest page response was also for foo in 1351 milliseconds.

05:37:10 [ INFO] stopping after 30 seconds...
05:37:10 [ INFO] waiting for users to exit

Our example only runs for 30 seconds, so we only see running statistics once. When the test completes, we get more detail in the final summary. The first two tables are the same as what we saw earlier, however now they include all statistics for the entire load test:

 Name                    | # reqs         | # fails        | req/s  | fail/s
 GET bar                 | 6,050          | 0 (0%)         | 201    | 0    
 GET /path/to/foo        | 30,257         | 0 (0%)         | 1,008  | 0    
 Aggregated              | 36,307         | 0 (0%)         | 1,210  | 0    

 Name                    | Avg (ms)   | Min        | Max        | Median    
 GET bar                 | 66         | 32         | 1388       | 53      
 GET /path/to/foo        | 68         | 31         | 1395       | 53      
 Aggregated              | 67         | 31         | 1395       | 50      

The ratio between foo and bar remained 5:2 as expected. As the test ran, however, we saw some slower page loads, with the slowest again foo this time at 1395 milliseconds.

Slowest page load within specified percentile of requests (in ms):
Name                    | 50%    | 75%    | 98%    | 99%    | 99.9%  | 99.99%
GET bar                 | 53     | 66     | 217    | 537    | 1872   | 12316
GET /path/to/foo        | 53     | 66     | 265    | 1060   | 1800   | 10732
Aggregated              | 53     | 66     | 237    | 645    | 1832   | 10818

A new table shows additional information, breaking down response time by percentile. This shows that the slowest page loads only happened in the slowest 0.01% of requests, so were very much an edge case. 99.9% of the time, page loads completed in less than 2 seconds.


Copyright 2020 Jeremy Andrews

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0


Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.



Modules

goose — Helpers and objects for building Goose load tests.

Macros

task — task!(foo) expands to GooseTask::new(foo), but also does some boxing to work around a limitation in the compiler.

taskset — taskset!("foo") expands to GooseTaskSet::new("foo").

Structs

GooseAttack — Internal global state for load test.

GooseConfiguration — CLI options available when launching a Goose load test.

Socket — Socket used for coordinating a Gaggle, a distributed load test.

Enums

GooseError — Definition of all errors a GooseAttack can return.

Statics

WORKER_ID — Worker ID to aid in tracing logs when running a Gaggle.