Crate goose
Goose
Have you ever been attacked by a goose?
Goose is a load testing framework inspired by Locust. User behavior is defined with standard Rust code.
Goose load tests, called Goose Attacks, are built by creating an application with Cargo, and declaring a dependency on the Goose library.
Goose uses reqwest to provide a convenient HTTP client.
Documentation
Creating and running a Goose load test
Creating a simple Goose load test
First create a new empty cargo application, for example:

$ cargo new loadtest
     Created binary (application) `loadtest` package
$ cd loadtest/
Add Goose as a dependency in Cargo.toml:

[dependencies]
goose = "0.13"
Add the following boilerplate use declaration at the top of your src/main.rs:
use goose::prelude::*;
Using the above prelude will automatically add the following use statements necessary for your load test, so you don’t need to manually add them:
pub use goose::config::{GooseDefault, GooseDefaultType};
pub use goose::goose::{
    GooseTask, GooseTaskError, GooseTaskFunction, GooseTaskResult, GooseTaskSet, GooseUser,
};
pub use goose::metrics::{GooseCoordinatedOmissionMitigation, GooseMetrics};
pub use goose::{task, taskset, GooseAttack, GooseError, GooseScheduler};
Below your main function (which currently is the default Hello, world!), add one or more load test functions. The names of these functions are arbitrary, but it is recommended you use self-documenting names. Load test functions must be async. Each load test function must accept a reference to a GooseUser object and return a GooseTaskResult. For example:
use goose::prelude::*;

async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/foo").await?;

    Ok(())
}
In the above example, we’re using the GooseUser helper get to load a path on the website we are load testing. This helper creates a reqwest::RequestBuilder object and uses it to build and execute a request for the above path. If you want access to the RequestBuilder object, you can instead use the goose_get helper, for example to set a timeout on this specific request:
use std::time;

use goose::prelude::*;

async fn loadtest_bar(user: &GooseUser) -> GooseTaskResult {
    let request_builder = user.goose_get("/path/to/bar").await?;
    let _goose = user
        .goose_send(request_builder.timeout(time::Duration::from_secs(3)), None)
        .await?;

    Ok(())
}
We pass the RequestBuilder object to goose_send, which builds and executes it, also collecting useful metrics. The .await at the end is necessary as goose_send is an async function.
Once all our tasks are created, we edit the main function to initialize Goose and register the tasks. In this very simple example we only have two tasks to register, while in a real load test you can have any number of task sets, each with any number of individual tasks.
use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_metrics = GooseAttack::initialize()?
        .register_taskset(
            taskset!("LoadtestTasks")
                // Register the foo task, assigning it a weight of 10.
                .register_task(task!(loadtest_foo).set_weight(10)?)
                // Register the bar task, assigning it a weight of 2 (so it
                // runs 1/5 as often as foo). Apply a task name which shows up
                // in metrics.
                .register_task(task!(loadtest_bar).set_name("bar").set_weight(2)?),
        )
        // You could also set a default host here, for example:
        .set_default(GooseDefault::Host, "http://dev.local/")?
        // We set a default run time so this test runs to completion.
        .set_default(GooseDefault::RunTime, 1)?
        .execute()?;

    Ok(())
}

// A task function that loads `/path/to/foo`.
async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/foo").await?;

    Ok(())
}

// A task function that loads `/path/to/bar`.
async fn loadtest_bar(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/bar").await?;

    Ok(())
}
Goose now spins up a configurable number of users, each simulating a user on your website. Thanks to reqwest, each user maintains its own web client state, handling cookies and more, so your “users” can log in, fill out forms, and otherwise behave as real users on your site would.
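For example, a task can log a user in by posting form credentials, after which the session cookie is carried on that user’s subsequent requests. This is a sketch, not from the original documentation: the /login path and form field names are hypothetical, and it assumes the goose_post helper (the POST counterpart of the goose_get helper shown above) together with reqwest’s form() builder:

use goose::prelude::*;

// A hypothetical login task: the "/login" path and the form field
// names are placeholders for whatever your own site expects.
async fn loadtest_login(user: &GooseUser) -> GooseTaskResult {
    let params = [("username", "test_user"), ("password", "secret")];
    let request_builder = user.goose_post("/login").await?;
    // goose_send() builds and executes the request, collecting metrics.
    // Any session cookie set by the response is retained by this user's
    // client and sent on all of its subsequent requests.
    let _goose = user.goose_send(request_builder.form(&params), None).await?;

    Ok(())
}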
Running the Goose load test
Attempts to run our example will result in an error, as we have not yet defined the host against which this load test should be run. We intentionally do not hard code the host in the individual tasks, as this allows us to run the test against different environments, such as local development, staging, and production.
$ cargo run --release
   Compiling loadtest v0.1.0 (~/loadtest)
    Finished release [optimized] target(s) in 1.52s
     Running `target/release/loadtest`
Error: InvalidOption { option: "--host", value: "", detail: "A host must be defined via the --host option, the GooseAttack.set_default() function, or the GooseTaskSet.set_host() function (no host defined for WebsiteUser)." }
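As the error message suggests, a host can alternatively be hard coded per task set via GooseTaskSet::set_host(). A minimal sketch of that approach, reusing loadtest_foo from above:

use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_metrics = GooseAttack::initialize()?
        .register_taskset(
            // set_host() pins this task set to a specific host,
            // overriding any global default.
            taskset!("LoadtestTasks")
                .set_host("http://dev.local/")
                .register_task(task!(loadtest_foo)),
        )
        .set_default(GooseDefault::RunTime, 1)?
        .execute()?;

    Ok(())
}

async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let _goose = user.get("/path/to/foo").await?;

    Ok(())
}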
Pass in the -h flag to see all available run-time options. For now, we’ll use a few options to customize our load test.
$ cargo run --release -- --host http://dev.local -t 30s -v
The first option we specified is --host, which in this case tells Goose to run the load test against a VM on my local network. The -t 30s option tells Goose to end the load test after 30 seconds (for real load tests you’ll certainly want to run it longer; you can use h, m, and s to specify hours, minutes, and seconds respectively, so for example -t1h30m would run the load test for 1 hour 30 minutes). Finally, the -v flag tells Goose to display INFO and higher level logs to stdout, giving more insight into what is happening. (Additional -v flags will result in considerably more debug output, and are not recommended for running actual load tests; they’re only useful if you’re trying to debug Goose itself.)
Running the test results in the following output (broken up to explain it as it goes):
    Finished release [optimized] target(s) in 0.05s
     Running `target/release/loadtest --host 'http://dev.local' -t 30s -v`
15:42:23 [ INFO] Output verbosity level: INFO
15:42:23 [ INFO] Logfile verbosity level: WARN
If we set the --log-file flag, Goose will write a log file with WARN and higher level logs as the load test runs (add a -g flag to log all INFO and higher level logs).
15:42:23 [ INFO] concurrent users defaulted to 8 (number of CPUs)
15:42:23 [ INFO] run_time = 30
15:42:23 [ INFO] hatch_rate = 1
Goose will default to launching 1 user per available CPU core, and will launch them all in one second. You can change how many users are launched with the -u option, and you can change how many users are launched per second with the -r option. For example, -u30 -r2 would launch 30 users over 15 seconds (two users per second).
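These can also be set as in-code defaults. A minimal sketch, assuming GooseDefault::Users and GooseDefault::HatchRate follow the same set_default() pattern as GooseDefault::Host and GooseDefault::RunTime in our example:

use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_metrics = GooseAttack::initialize()?
        // Assumption: these variants mirror the -u and -r run-time
        // options: launch 30 users, two per second.
        .set_default(GooseDefault::Users, 30)?
        .set_default(GooseDefault::HatchRate, 2)?
        // A real load test would also register task sets and set a host.
        .execute()?;

    Ok(())
}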
15:42:23 [ INFO] global host configured: http://dev.local/
15:42:23 [ INFO] initializing user states...
15:42:23 [ INFO] launching user 1 from LoadtestTasks...
15:42:24 [ INFO] launching user 2 from LoadtestTasks...
15:42:25 [ INFO] launching user 3 from LoadtestTasks...
15:42:26 [ INFO] launching user 4 from LoadtestTasks...
15:42:27 [ INFO] launching user 5 from LoadtestTasks...
15:42:28 [ INFO] launching user 6 from LoadtestTasks...
15:42:29 [ INFO] launching user 7 from LoadtestTasks...
15:42:30 [ INFO] launching user 8 from LoadtestTasks...
15:42:31 [ INFO] launched 8 users...
15:42:31 [ INFO] printing running metrics after 8 seconds...
Each user is launched in its own thread with its own user state. Goose is able to make very efficient use of server resources. By default Goose resets the metrics after all users are launched, but first it outputs the metrics collected while ramping up:
15:42:31 [ INFO] printing running metrics after 8 seconds...

=== PER TASK METRICS ===
------------------------------------------------------------------------------
Name                     |   # times run |        # fails |   task/s |  fail/s
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |         2,033 |         0 (0%) |   254.12 |    0.00
  2: bar                 |           407 |         0 (0%) |    50.88 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |         2,440 |         0 (0%) |   305.00 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |       14.23 |          6 |          32 |         14
  2: bar                 |       14.13 |          6 |          30 |         14
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       14.21 |          6 |          32 |         14

=== PER REQUEST METRICS ===
------------------------------------------------------------------------------
Name                     |        # reqs |        # fails |    req/s |  fail/s
------------------------------------------------------------------------------
GET /                    |         2,033 |         0 (0%) |   254.12 |    0.00
GET bar                  |           407 |         0 (0%) |    50.88 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |         2,440 |         0 (0%) |   305.00 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
GET /                    |       14.18 |          6 |          32 |         14
GET bar                  |       14.08 |          6 |          30 |         14
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       14.16 |          6 |          32 |         14

All 8 users hatched, resetting metrics (disable with --no-reset-metrics).
Goose can optionally display running metrics if started with --running-metrics INT, where INT is an integer value in seconds. For example, if Goose is started with --running-metrics 15 it will display running values approximately every 15 seconds.
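Both this option and the metrics reset noted above can likewise be set as in-code defaults. A sketch, assuming GooseDefault::RunningMetrics and GooseDefault::NoResetMetrics mirror the --running-metrics and --no-reset-metrics run-time options:

use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_metrics = GooseAttack::initialize()?
        // Assumption: mirrors --running-metrics 15, printing running
        // metrics approximately every 15 seconds.
        .set_default(GooseDefault::RunningMetrics, 15)?
        // Assumption: mirrors --no-reset-metrics, keeping ramp-up
        // metrics in the final tables.
        .set_default(GooseDefault::NoResetMetrics, true)?
        // A real load test would also register task sets and set a host.
        .execute()?;

    Ok(())
}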
Running metrics are broken into several tables. First are the per-task metrics, which are further split into two sections. The first section shows how many times each task has run, how many of those runs failed (non-2xx response), and the corresponding per-second rates.
This table shows details for all Task Sets and all Tasks defined by your load test, regardless of whether they actually ran. This can be useful to ensure that you have set up weighting as intended, and that you are simulating enough users. As our first task wasn’t named, it shows up simply as “1:”. Our second task was named, so it shows up as the name we gave it, “bar”.
15:42:46 [ INFO] printing running metrics after 15 seconds...

=== PER TASK METRICS ===
------------------------------------------------------------------------------
Name                     |   # times run |        # fails |   task/s |  fail/s
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |         4,618 |         0 (0%) |   307.87 |    0.00
  2: bar                 |           924 |         0 (0%) |    61.60 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |         5,542 |         0 (0%) |   369.47 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |       21.17 |          8 |         151 |         19
  2: bar                 |       21.62 |          9 |         156 |         19
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       21.24 |          8 |         156 |         19
The second table breaks down the same metrics by request instead of by task. For our simple load test, each task only makes a single request, so the metrics are nearly the same. There are two main differences. First, metrics are listed by request type and path or name: the first request shows up as GET /path/to/foo as the request was not named, while the second shows up as GET bar as the request was named. Second, the times to complete each are slightly smaller, as this is only the time to make the request, not the time for Goose to execute the entire task.
=== PER REQUEST METRICS ===
------------------------------------------------------------------------------
Name                     |        # reqs |        # fails |    req/s |  fail/s
------------------------------------------------------------------------------
GET /path/to/foo         |         4,618 |         0 (0%) |   307.87 |    0.00
GET bar                  |           924 |         0 (0%) |    61.60 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |         5,542 |         0 (0%) |   369.47 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
GET /path/to/foo         |       21.13 |          8 |         151 |         19
GET bar                  |       21.58 |          9 |         156 |         19
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       21.20 |          8 |         156 |         19
Note that Goose respected the per-task weights we set, and foo (with a weight of 10) is being loaded five times as often as bar (with a weight of 2). On average each page is returning within 21.2 milliseconds. The quickest page response was for foo in 8 milliseconds. The slowest page response was for bar in 156 milliseconds.
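As an aside, the request name in our example came from naming the task with set_name(). A request can presumably also be named at the call site: the earlier timeout example passed None as the second argument to goose_send, and this sketch assumes that argument is an optional request name:

use goose::prelude::*;

async fn loadtest_foo(user: &GooseUser) -> GooseTaskResult {
    let request_builder = user.goose_get("/path/to/foo").await?;
    // Assumption: passing Some("foo") instead of None names this
    // request, so it shows up as "GET foo" in the per-request metrics
    // rather than as its path.
    let _goose = user.goose_send(request_builder, Some("foo")).await?;

    Ok(())
}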
15:43:02 [ INFO] stopping after 30 seconds...
15:43:02 [ INFO] waiting for users to exit
15:43:02 [ INFO] exiting user 3 from LoadtestTasks...
15:43:02 [ INFO] exiting user 4 from LoadtestTasks...
15:43:02 [ INFO] exiting user 5 from LoadtestTasks...
15:43:02 [ INFO] exiting user 8 from LoadtestTasks...
15:43:02 [ INFO] exiting user 2 from LoadtestTasks...
15:43:02 [ INFO] exiting user 7 from LoadtestTasks...
15:43:02 [ INFO] exiting user 6 from LoadtestTasks...
15:43:02 [ INFO] exiting user 1 from LoadtestTasks...
15:43:02 [ INFO] printing metrics after 30 seconds...
Our example only runs for 30 seconds, so we only see running metrics once. When the test completes, we get more detail in the final summary. The first two tables are the same as what we saw earlier, however now they include all metrics for the entire length of the load test:
=== PER TASK METRICS ===
------------------------------------------------------------------------------
Name                     |   # times run |        # fails |   task/s |  fail/s
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |         9,974 |         0 (0%) |   332.47 |    0.00
  2: bar                 |         1,995 |         0 (0%) |    66.50 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |        11,969 |         0 (0%) |   398.97 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
1: LoadtestTasks         |
  1:                     |       19.65 |          8 |         151 |         18
  2: bar                 |       19.92 |          9 |         156 |         18
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       19.69 |          8 |         156 |         18

=== PER REQUEST METRICS ===
------------------------------------------------------------------------------
Name                     |        # reqs |        # fails |    req/s |  fail/s
------------------------------------------------------------------------------
GET /                    |         9,974 |         0 (0%) |   332.47 |    0.00
GET bar                  |         1,995 |         0 (0%) |    66.50 |    0.00
-------------------------+---------------+----------------+----------+--------
Aggregated               |        11,969 |         0 (0%) |   398.97 |    0.00
------------------------------------------------------------------------------
Name                     |    Avg (ms) |        Min |         Max |     Median
------------------------------------------------------------------------------
GET /                    |       19.61 |          8 |         151 |         18
GET bar                  |       19.88 |          9 |         156 |         18
-------------------------+-------------+------------+-------------+-----------
Aggregated               |       19.66 |          8 |         156 |         18
------------------------------------------------------------------------------
The ratio between foo and bar remained 5:1 as expected, matching their 10:2 weighting.
------------------------------------------------------------------------------
Slowest page load within specified percentile of requests (in ms):
------------------------------------------------------------------------------
Name                     |    50% |    75% |    98% |    99% |  99.9% | 99.99%
------------------------------------------------------------------------------
GET /                    |     18 |     21 |     29 |     79 |    140 |    140
GET bar                  |     18 |     21 |     29 |    120 |    150 |    150
-------------------------+--------+--------+--------+--------+--------+-------
Aggregated               |     18 |     21 |     29 |     84 |    140 |    156
A new table shows additional information, breaking down response time by percentile. This shows that the slowest page loads happened only in the slowest 1% of requests, making them an edge case; 98% of the time, pages loaded in 29 milliseconds or less.
------------------------------------------------------------------------------
Users: 2
Target host: http://dev.local/
During: 2021-08-12 15:42:22 - 2021-08-12 15:43:02 (duration: 00:30:00)

goose v0.13.1
------------------------------------------------------------------------------
And the final table shows an overview of the load test configuration and duration.
License
Copyright 2020-21 Jeremy Andrews
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Modules
config: Functions and structures related to configuring a Goose load test.
controller: Optional telnet and WebSocket Controller threads.
goose: Helpers and objects for building Goose load tests.
logger: An optional thread for writing logs.
metrics: Optional metrics collected and aggregated during load tests.
prelude: A list of things that typically must be imported to write a Goose load test.
util: Utility functions used by Goose, and available when writing load tests.
Macros
task: task!(foo) expands to GooseTask::new(foo), but also does some boxing to work around a limitation in the compiler.
taskset: taskset!("foo") expands to GooseTaskSet::new("foo").
Structs
GooseAttack: Global internal state for the load test.
Enums
AttackMode: A GooseAttack load test operates in one (and only one) of the following modes.
AttackPhase: A GooseAttack load test moves through each of the following phases during a complete load test.
GooseError: An enumeration of all errors a GooseAttack can return.
GooseScheduler: Used to define the order GooseTaskSets and GooseTasks are allocated.
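For example, a load test can opt out of the default round-robin allocation. A minimal sketch, assuming GooseScheduler::Random is one of the variants and that set_scheduler() chains like the other GooseAttack builder methods shown earlier:

use goose::prelude::*;

fn main() -> Result<(), GooseError> {
    let _goose_metrics = GooseAttack::initialize()?
        // Allocate task sets and tasks in random order rather than
        // the default round-robin order.
        .set_scheduler(GooseScheduler::Random)
        // A real load test would also register task sets and set a host.
        .execute()?;

    Ok(())
}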
Functions
get_worker_id: Returns the unique identifier of the running Worker when running in Gaggle mode.