# Windsock - A DB benchmarking framework
Windsock is suitable for:

- Iteratively testing performance during development of a database or service (use a different tool for microbenchmarks)
- Investigating the performance of different workloads on a database you intend to use
What you do:

- Bring your own async-compatible Rust DB driver
- Define your benchmark logic, which reports some simple stats back to windsock
- Define your pool of benchmarks
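The benchmark logic you write is ordinary Rust that drives your driver and hands a few numbers back to the harness. As a rough, self-contained sketch of the kind of stats a bench collects (the `SimpleStats` and `measure` names here are illustrative only, not windsock's real reporting API):

```rust
use std::time::{Duration, Instant};

/// Illustrative stand-in for the simple stats a bench reports.
/// (Not windsock's real types; see its docs for the actual API.)
#[derive(Debug)]
pub struct SimpleStats {
    pub total_ops: u64,
    pub elapsed: Duration,
    pub mean_latency: Duration,
}

/// Run `op` for `ops` iterations, timing each call.
/// A real windsock bench would drive your async DB driver here.
pub fn measure<F: FnMut()>(ops: u64, mut op: F) -> SimpleStats {
    let start = Instant::now();
    let mut total_latency = Duration::ZERO;
    for _ in 0..ops {
        let op_start = Instant::now();
        op();
        total_latency += op_start.elapsed();
    }
    SimpleStats {
        total_ops: ops,
        elapsed: start.elapsed(),
        mean_latency: total_latency / ops as u32,
    }
}
```

Windsock then aggregates whatever your bench reports into its result tables.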
What windsock does:

- Provides a CLI from which you can:
  - Query available benchmarks
  - Run benchmarks matching specific tags
- Sets up and cleans up required cloud resources, either automatically or on demand
- Processes benchmark results into readable tables
- Lets you set a baseline and compare later runs against it
## Add windsock benches to your project
### 1.

Import windsock and set up a cargo bench for windsock:

```toml
[dev-dependencies]
windsock = "0.1"

[[bench]]
name = "windsock"
harness = false
```

All windsock benchmarks should go into this one bench.
### 2.

Set up a shortcut to run windsock in `.cargo/config.toml`:

```toml
[alias]
windsock = "test --release --bench windsock --"
windsock-debug = "test --bench windsock --"
```

This allows us to run `cargo windsock` instead of `cargo test --release --bench windsock --`.
### 3.

Then at `benches/windsock` create a benchmark like this (simplified):

```rust
// This struct is cloned once for each tokio task it will be run in.
```

This example is simplified for demonstration purposes; refer to windsock/benches/windsock in this repo for a full working example.
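To give a feel for the shape such a benchmark takes, here is a loose, hypothetical sketch. The `ExampleBench` trait, `KafkaBench` struct, and method names below are all illustrative stand-ins, not windsock's real API; only the tag-based selection idea comes from the sections above.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the trait windsock benches implement.
trait ExampleBench {
    /// Tags the CLI uses to select benchmarks, e.g. `db=kafka`.
    fn tags(&self) -> HashMap<String, String>;
    /// The benchmark body; returns completed operations.
    fn run(&self) -> u64;
}

// Cloned once per tokio task, as noted above.
#[derive(Clone)]
struct KafkaBench {
    topology: String,
}

impl ExampleBench for KafkaBench {
    fn tags(&self) -> HashMap<String, String> {
        HashMap::from([
            ("db".to_owned(), "kafka".to_owned()),
            ("topology".to_owned(), self.topology.clone()),
        ])
    }

    fn run(&self) -> u64 {
        // Placeholder: a real bench would drive your DB driver
        // here and count the operations it completed.
        1000
    }
}
```

The real trait carries more machinery (async runtime handles, cloud resource hooks), which is why the repo's full example is the better reference.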
## How to perform various tasks in the cargo windsock CLI
### Just run every bench

```shell
> cargo windsock run-local
```
### Run benches with matching tags and view all the results in one table

```shell
> cargo windsock run-local db=kafka OPS=1000 topology=single # run benchmarks matching some tags
> cargo windsock results # view the results of the benchmarks with the same tags in a single table
```
### Iteratively compare results against a previous implementation

```shell
> git checkout main # checkout original implementation
> cargo windsock run-local # run all benchmarks
> cargo windsock baseline-set # set the last benchmark run as the baseline
> vim src/main.rs # modify implementation
> cargo windsock run-local # run all benchmarks, every result is compared against the baseline
> cargo windsock results # view those results in a nice table
> vim src/main.rs # modify implementation again
> cargo windsock run-local # run all benchmarks, every result is compared against the baseline
```
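At its core, comparing a run against a baseline is just a per-metric percentage difference. A minimal sketch of that arithmetic (illustrative only; the `percent_change` and `comparison_cell` helpers are not windsock's actual code):

```rust
/// Percentage change of `current` relative to `baseline`.
/// Positive means the metric went up, e.g. +5.0 is 5% higher.
fn percent_change(baseline: f64, current: f64) -> f64 {
    (current - baseline) / baseline * 100.0
}

/// Render one cell the way a comparison table might,
/// e.g. "1050.0 (+5.0%)".
fn comparison_cell(baseline: f64, current: f64) -> String {
    format!("{current:.1} ({:+.1}%)", percent_change(baseline, current))
}
```

Whether a positive delta is good or bad depends on the metric: up is good for throughput, bad for latency.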
### Run benchmarks in the cloud (simple)

```shell
# create cloud resources, run benchmarks and then cleanup - all in one command
> cargo windsock cloud-setup-run-cleanup
```
### Iteratively compare results against a previous implementation (running in a remote cloud)

```shell
# Setup the cloud resources and then form a baseline
> git checkout main # checkout original implementation
> cargo windsock cloud-setup db=kafka # setup the cloud resources required to run all kafka benchmarks
> cargo windsock cloud-run db=kafka # run all the kafka benchmarks in the cloud
> cargo windsock baseline-set # set the last benchmark run as the baseline

# Make a change and measure the effect
> vim src/main.rs # modify implementation
> cargo windsock cloud-run db=kafka # run all benchmarks, every result is compared against the baseline
> cargo windsock results # view those results in a nice table, compared against the baseline

# And again
> vim src/main.rs # modify implementation again
> cargo windsock cloud-run db=kafka # run all benchmarks, every result is compared against the baseline

# And finally...
> cargo windsock cloud-cleanup # terminate all the cloud resources now that we are done
```
### Generate graph webpage

TODO: planned, but not implemented

```shell
> cargo windsock run-local # run all benches
> cargo windsock generate-webpage # generate a webpage from the results
```