# cargo-evals
`cargo-evals` is the Cargo subcommand for running `evals` suites for `agents`.
It can:
- initialize `evals.toml`
- discover suites generated by `evals::build()`
- list suites and evals
- run filtered evals
- emit inline terminal output or JSON events
## Commands
```bash
cargo evals init
cargo evals list
cargo evals models
cargo evals run
cargo evals preserves
```
## Minimal Setup
Your crate needs:
- `evals::build()?` in `build.rs`
- `evals::setup!();` in `src/lib.rs`
- suites under `evals/**/*.rs`
With that in place, `cargo evals` will discover and run the generated registry automatically.
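Putting those three pieces together, a minimal crate layout looks like this (the suite file name is illustrative; any `*.rs` file under `evals/` works):
```text
my-crate/
├── Cargo.toml
├── evals.toml      # generated by `cargo evals init`
├── build.rs        # calls evals::build()?
├── src/
│   └── lib.rs      # contains evals::setup!();
└── evals/
    └── basic.rs    # an eval suite, discovered by evals::build()
```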
## External Project Example
`build.rs`:
```rust
fn main() -> anyhow::Result<()> {
    // Generate the eval registry from suites under evals/
    evals::build()?;
    Ok(())
}
```
`src/lib.rs`:
```rust
evals::setup!();
```
Generate `evals.toml`:
```bash
cargo evals init
```
The generated file includes:
- a working local Ollama target
- a default timeout and output dir
- commented examples for OpenAI, Anthropic, OpenRouter, Workers AI, and LM Studio
You can also write `evals.toml` yourself. A minimal version is:
```toml
[evals]
[[evals.targets]]
provider = "ollama"
model = "llama3.2:3b"
```
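Because `targets` is a TOML array of tables, several targets can be listed side by side. The second entry below is hypothetical; its `provider` and `model` values are assumptions patterned on the commented examples that `cargo evals init` emits, not values confirmed by this tool:
```toml
[evals]

[[evals.targets]]
provider = "ollama"
model = "llama3.2:3b"

# Hypothetical second target -- provider/model strings are assumptions.
[[evals.targets]]
provider = "openai"
model = "gpt-4o-mini"
```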
Then build (so `build.rs` can generate the eval registry) and run:
```bash
cargo build
cargo evals list
cargo evals run
```