# cargo-evals

cargo-evals 0.2.0 is a Cargo subcommand, not a library. It runs eval suites for agents.
It can:

- initialize `evals.toml`
- discover suites generated by `evals::build()`
- list suites and evals
- run filtered evals
- emit inline terminal output or JSON events
## Commands
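The subcommand names below are guesses mapped onto the feature list above, not taken from the crate's docs; check `cargo evals --help` for the real interface:

```shell
cargo evals init    # write a starter evals.toml (name assumed)
cargo evals list    # enumerate discovered suites and evals (name assumed)
cargo evals run     # run evals, optionally filtered (name assumed)
```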
## Minimal Setup
Your crate needs:
- `evals::build()?` in `build.rs`
- `evals::setup!();` in `src/lib.rs`
- suites under `evals/**/*.rs`
With that in place, `cargo evals` will discover and run the generated registry automatically.
## External Project Example
build.rs:
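A minimal sketch, assuming only the `evals::build()?` requirement stated under Minimal Setup (the `main` wrapper and error type are illustrative):

```rust
// build.rs: generate the evals registry at compile time.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Discovers suites under evals/**/*.rs and emits the registry.
    evals::build()?;
    Ok(())
}
```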
src/lib.rs:
```rust
evals::setup!();
```
Generate `evals.toml`:
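The exact command is not shown above; assuming an `init` subcommand (a guess, matching the "initialize `evals.toml`" feature), it would look like:

```shell
cargo evals init
```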
The generated file includes:
- a working local Ollama target
- a default timeout and output dir
- commented examples for OpenAI, Anthropic, OpenRouter, Workers AI, and LM Studio
You can also write `evals.toml` yourself. A minimal version needs little more than a target; the table and key names below are illustrative guesses, only the values are certain:

```toml
[defaults]

[[targets]]
provider = "ollama"
model = "llama3.2:3b"
```
Then:
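As described under Minimal Setup, the bare subcommand discovers and runs the generated registry:

```shell
cargo evals
```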