# agent-line

A batteries-included Rust library for building agent workflows. Sync-only, opinionated, and designed for people getting started with agent patterns.

Define agents, wire them into workflows, and let the runner execute them. Agents communicate through shared context and control flow with outcomes like `Continue`, `Next`, `Retry`, and `Done`.
## Quick Start
A minimal agent (the `Agent` trait shape shown here is an assumption based on the sections below):

```rust
use agent_line::{Agent, Ctx, Outcome, StepError};

struct Hello;

impl Agent for Hello {
    fn run(&mut self, ctx: &mut Ctx) -> Result<Outcome, StepError> {
        ctx.set("greeting", "Hello world");
        Ok(Outcome::Done)
    }
}
```

See the `hello_world` example for a runnable single-agent version.
## Workflows

Agents are registered into a workflow, then wired together with `start_at` and `then`. The workflow validates everything at build time.
```rust
// `builder` comes from the workflow builder API; its construction and the
// agent names were elided in the original snippet, so they are illustrative.
let wf = builder
    .register("fetch", Fetch)
    .register("summarize", Summarize)
    .register("report", Report)
    .start_at("fetch")
    .then("summarize")
    .then("report")
    .build()
    .unwrap();

let mut runner = Runner::new(wf); // runner type name assumed
let result = runner.run(&mut ctx);
```
Agents can also route dynamically by returning `Outcome::Next("agent_name")` instead of `Outcome::Continue`.
## Context (Ctx)

`Ctx` is shared mutable state passed to every agent. It provides a key-value store and an event log.
```rust
// Key and message strings are illustrative; the method names are the crate's.
let mut ctx = Ctx::new();

// Key-value store
ctx.set("greeting", "Hello world");
let val = ctx.get("greeting"); // Some("Hello world")
ctx.remove("greeting");

// Event log
ctx.log("fetched 3 records");
for entry in ctx.logs() {
    println!("{entry}");
}
ctx.clear_logs();

// Reset everything
ctx.clear();
```
`Ctx` persists across multiple `runner.run()` calls, so the log and store accumulate across runs.
## LLM Integration

Agents that need an LLM hold their own `LlmConfig` and call `LlmConfig::request()` to send a chat request. Supports Ollama, OpenAI-compatible APIs (OpenRouter, etc.), and the Anthropic API.
```rust
use agent_line::LlmConfig; // import path assumed
```

In `main`, build a config and inject it into the agent:

```rust
let llm = LlmConfig::from_env(); // reads AGENT_LINE_* env vars
let wf = builder
    // agent name and constructor are illustrative; the original elided them
    .register("summarize", Summarizer::with_llm(llm))
    .build()?;
```
### Configuration

`LlmConfig::from_env()` reads:
| Variable | Default | Description |
|---|---|---|
| `AGENT_LINE_PROVIDER` | `ollama` | LLM provider: `ollama`, `openai`, or `anthropic` |
| `AGENT_LINE_LLM_URL` | `http://localhost:11434` | LLM API base URL |
| `AGENT_LINE_MODEL` | `llama3.1:8b` | Model name |
| `AGENT_LINE_NUM_CTX` | `4096` | Ollama context window size (`options.num_ctx`) |
| `AGENT_LINE_MAX_TOKENS` | value of `AGENT_LINE_NUM_CTX` | OpenAI/Anthropic `max_tokens` cap on the response |
| `AGENT_LINE_API_KEY` | (none) | API key (required for remote providers) |
| `AGENT_LINE_DEBUG` | (unset) | Set to any value to log the resolved config and LLM requests/responses to stderr |
For explicit configuration without environment variables, use `LlmConfig::builder()` instead.
### Provider examples
Ollama (default, no API key needed):
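A sketch of an explicit environment setup using the documented defaults (with Ollama running locally on the default port, none of these are strictly required):

```shell
# These values are the defaults from the configuration table above.
export AGENT_LINE_PROVIDER=ollama
export AGENT_LINE_LLM_URL=http://localhost:11434
export AGENT_LINE_MODEL=llama3.1:8b
```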
Requests to Ollama send `"think": false` so thinking-capable models (Qwen 3, etc.) skip the `<think>...</think>` reasoning block before the response. This is the default for latency reasons; thinking can otherwise add minutes per request. Models without thinking support ignore the field.
OpenRouter:
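OpenRouter speaks the OpenAI-compatible API, so it uses the `openai` provider. A sketch; the base URL is OpenRouter's public endpoint, and the model id and key are placeholders:

```shell
export AGENT_LINE_PROVIDER=openai
export AGENT_LINE_LLM_URL=https://openrouter.ai/api/v1
export AGENT_LINE_MODEL=meta-llama/llama-3.1-8b-instruct   # illustrative model id
export AGENT_LINE_API_KEY=sk-or-...                        # your OpenRouter key
```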
Anthropic:
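For the Anthropic API, a sketch along the same lines (the base URL is Anthropic's public endpoint; model id and key are placeholders):

```shell
export AGENT_LINE_PROVIDER=anthropic
export AGENT_LINE_LLM_URL=https://api.anthropic.com
export AGENT_LINE_MODEL=claude-sonnet-4-5   # illustrative model id
export AGENT_LINE_API_KEY=sk-ant-...        # your Anthropic key
```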
### Multiple models per workflow

Give each agent its own `LlmConfig`. A cheap local model handles routine extraction; a stronger remote model handles the harder reasoning step:
```rust
use agent_line::LlmConfig; // import path assumed

// Builder method names follow the required/optional field list below
// and are assumptions; model names are illustrative.
let cheap = LlmConfig::builder()
    .provider("ollama")
    .base_url("http://localhost:11434")
    .model("qwen3:8b")
    .build()?;

let strong = LlmConfig::builder()
    .provider("anthropic")
    .base_url("https://api.anthropic.com")
    .model("claude-sonnet-4-5")
    .api_key(std::env::var("ANTHROPIC_API_KEY")?)
    .build()?;
```
Required `LlmConfig` fields: `provider`, `base_url`, `model`. Optional: `api_key`, `num_ctx` for Ollama requests, and `max_tokens` for OpenAI-compatible and Anthropic requests. `LlmConfig::build()` returns an error if a required field is missing.
See `examples/multi_model.rs` for a small pipeline and `examples/incident_investigation/` for a multi-file incident correlation example.
## Outcomes

Agents return an `Outcome` to control what happens next:
| Outcome | Behavior |
|---|---|
| `Continue` | Follow the default next step set by `.then()` |
| `Done` | Workflow complete, return the final state |
| `Next("name")` | Jump to a specific agent by name |
| `Retry(hint)` | Re-run the current agent (counted against `max_retries`) |
| `Wait(duration)` | Sleep, then re-run the current agent |
| `Fail(msg)` | Stop the workflow with an error |
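For instance, a routing agent might map state in `Ctx` to these outcomes. This is a sketch: the `Agent` trait shape and the `fetch_status` key are assumptions; only the `Outcome` variants come from the table above.

```rust
use std::time::Duration;

impl Agent for Router {
    fn run(&mut self, ctx: &mut Ctx) -> Result<Outcome, StepError> {
        match ctx.get("fetch_status").as_deref() {
            Some("rate_limited") => Ok(Outcome::Wait(Duration::from_secs(30))),
            Some("failed")       => Ok(Outcome::Retry("refetch".into())),
            Some("urgent")       => Ok(Outcome::Next("escalate".into())),
            Some(_)              => Ok(Outcome::Continue),
            None                 => Ok(Outcome::Fail("no fetch_status in ctx".into())),
        }
    }
}
```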
## Tools

Standalone utility functions for common agent tasks. Import with `use agent_line::tools;`.

### File operations
| Function | Signature | Description |
|---|---|---|
| `read_file` | `(path: &str) -> Result<String, StepError>` | Read file contents |
| `write_file` | `(path: &str, content: &str) -> Result<(), StepError>` | Write to file (creates parent dirs) |
| `append_file` | `(path: &str, content: &str) -> Result<(), StepError>` | Append to file (creates if missing) |
| `file_exists` | `(path: &str) -> bool` | Check if a file exists |
| `delete_file` | `(path: &str) -> Result<(), StepError>` | Delete a file |
| `create_dir` | `(path: &str) -> Result<(), StepError>` | Create directory (and parents) |
| `list_dir` | `(path: &str) -> Result<Vec<String>, StepError>` | List directory entries |
| `find_files` | `(path: &str, pattern: &str) -> Result<Vec<String>, StepError>` | Recursively find files by pattern |
### Command execution
| Function | Signature | Description |
|---|---|---|
| `run_cmd` | `(cmd: &str) -> Result<CmdOutput, StepError>` | Run a shell command |
| `run_cmd_in_dir` | `(dir: &str, cmd: &str) -> Result<CmdOutput, StepError>` | Run a shell command in a specific directory |
`CmdOutput` has `success: bool`, `stdout: String`, and `stderr: String`.
### HTTP
| Function | Signature | Description |
|---|---|---|
| `http_get` | `(url: &str) -> Result<String, StepError>` | GET request, returns body as string |
| `http_post` | `(url: &str, body: &str) -> Result<String, StepError>` | POST with string body |
| `http_post_json` | `(url: &str, body: &Value) -> Result<String, StepError>` | POST with JSON body |
### Parsing
| Function | Signature | Description |
|---|---|---|
| `strip_code_fences` | `(response: &str) -> String` | Remove markdown code fences from LLM output |
| `parse_lines` | `(response: &str) -> Vec<String>` | Split LLM response into lines, strip numbering/bullets |
| `extract_json` | `(response: &str) -> Result<String, StepError>` | Extract first JSON object or array from text |
## Error Handling

`StepError` has four variants designed around what the caller can do about them:
| Variant | Meaning | Action |
|---|---|---|
| `Invalid(String)` | Bad input or logic error | Fix the code |
| `Transient(String)` | Network/rate limit failure | Retry might help |
| `Failed(String)` | Agent explicitly failed | Handle or propagate |
| `Other(String)` | Everything else | Inspect the message |
`From` impls exist for `ureq::Error` (maps to `Transient`) and `std::io::Error` (maps to `Other`), so you can use `?` in tool calls.
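So inside an agent step, tool failures can bubble up with `?`. A sketch: the URL and file name are illustrative, and the `Agent` trait shape is assumed.

```rust
fn run(&mut self, ctx: &mut Ctx) -> Result<Outcome, StepError> {
    // a ureq failure here converts into StepError::Transient via the From impl
    let body = tools::http_get("http://localhost:8080/health")?;
    // an io::Error here converts into StepError::Other
    tools::write_file("health.txt", &body)?;
    Ok(Outcome::Continue)
}
```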
## Runner Configuration
```rust
// The default limit values were elided in the original snippet.
let mut runner = Runner::new(wf)        // runner type name assumed
    .with_max_steps(max_steps)          // overrides the default cap that prevents infinite loops
    .with_max_retries(max_retries);     // overrides the default per-agent consecutive retry limit
```
### Hooks

Runner supports closure-based hooks for observability. Closures are `FnMut`, so you can use stateful callbacks (counters, accumulators, etc.).
```rust
// Closure bodies are a sketch; the event fields are listed under "Hook event types".
let mut runner = Runner::new(wf)
    .on_step(|event| println!("step {}: {} -> {:?}", event.step_number, event.agent, event.outcome))
    .on_error(|event| eprintln!("step {}: {} failed: {:?}", event.step_number, event.agent, event.error));
```
Or use the built-in tracing shorthand, which prints step transitions and errors to stderr:
```rust
let mut runner = Runner::new(wf).with_tracing(); // runner type name assumed
```
Output looks like:
```text
[step 1] fetch_weather -> Continue (0.001s)
[step 2] fetch_calendar -> Continue (0.000s)
[step 3] fetch_email -> Continue (0.000s)
[step 4] summarize -> Done (2.340s)
```
### OpenTelemetry (OTEL) integration
You can export each agent step as OTEL spans by wiring hooks to your own tracer:
```rust
// The agent_line import path is assumed; the opentelemetry ones are the
// crate's real trait/type paths.
use opentelemetry::trace::{Span, TraceContextExt, Tracer};
use opentelemetry::Context;
use agent_line::Runner;

let mut workflow_span = tracer.start("workflow"); // `tracer` set up elsewhere
let parent = Context::new().with_remote_span_context(workflow_span.span_context().clone());
let parent_for_step = parent.clone();
let parent_for_error = parent.clone();

let mut runner = Runner::new(wf) // runner construction shape assumed
    .on_step(move |event| {
        // hook body elided in the original; typically start a child span
        // under `parent_for_step` named after `event.agent`, then end it
    })
    .on_error(move |event| {
        // likewise: record `event.error` on a span under `parent_for_error`
    });

let _ = runner.run(&mut ctx);
workflow_span.end();
```
Full runnable example: `examples/otel_tracing.rs`.
### Why tracing is hook-based
agent-line intentionally does not hardcode an observability backend in the core runner. That design is the most flexible for a library because users can:
- Send events to OTEL, `tracing`, metrics, logs, or custom sinks without adapter friction.
- Avoid extra global initialization and dependency weight when tracing is not needed.
- Keep runtime behavior predictable in embedded, CLI, service, and test environments.
The built-in `with_tracing()` helper remains for quick local debugging, while hooks cover production observability needs.
### Hook event types

`StepEvent` is passed to `on_step` after each successful agent step:
| Field | Type | Description |
|---|---|---|
| `agent` | `&str` | Name of the agent that ran |
| `outcome` | `&Outcome` | The outcome the agent returned |
| `duration` | `Duration` | Wall-clock time for the step |
| `step_number` | `usize` | Sequential step counter (starts at 1) |
| `retries` | `usize` | Consecutive retry count for the current agent |
`ErrorEvent` is passed to `on_error` when an agent errors or a limit is exceeded:
| Field | Type | Description |
|---|---|---|
| `agent` | `&str` | Name of the agent that errored |
| `error` | `&StepError` | The error that occurred |
| `step_number` | `usize` | Step number where the error happened |
## Examples
| Example | Run | Description |
|---|---|---|
| hello_world | `cargo run --example hello_world` | Single agent, no workflow |
| workflow | `cargo run --example workflow` | Linear workflow with chained agents |
| edit_loop | `cargo run --example edit_loop` | Validate/fix loop with retry |
| newsletter | `cargo run --example newsletter` | Multi-phase LLM workflow (needs Ollama) |
| multi_model | `cargo run --example multi_model` | Pipeline with different models per agent: cheap step uses local Ollama (qwen3:8b), strong step uses Anthropic (needs `ANTHROPIC_API_KEY`) |
| incident_investigation | `cargo run --example incident_investigation` | Multi-file incident correlation workflow with a fast small Ollama model for triage and a heavier Ollama model for the report; `main.rs` shows commented-out OpenRouter and Anthropic alternatives |
| coder | `cargo run --example coder` | Code generation with test loop (needs Ollama) |
| assistant | `cargo run --example assistant` | Personal assistant pipeline with tracing (needs Ollama) |
| otel_tracing | `cargo run --example otel_tracing` | OTEL span export from `on_step`/`on_error` hooks |
| parallel | `cargo run --example parallel` | Threaded fan-out/fan-in with researcher/writer/editor pipeline |
## TODO
- Rename `find_files` to `glob` or add proper glob pattern support.
- Better LLM error output. Today a non-2xx response surfaces as `transient: llm request failed: http status: 404` with no body. Read the response body and surface the underlying message (e.g. Ollama's "model X not found") so users can act on it.
- Expose Ollama thinking mode as an opt-in. The library currently hardcodes `"think": false` for the Ollama provider so thinking models (Qwen 3, etc.) skip the `<think>` block by default. Add a way to re-enable it (likely a method on `LlmConfigBuilder`) for users who want the quality bump on hard reasoning tasks and can wait.
- Switch `tools::file::*` and `tools::command::run_cmd_in_dir` path parameters from `&str` to `impl AsRef<Path>` to match the Rust stdlib convention (`std::fs::read_to_string`, etc.). Source-compatible for `&str` callers; `PathBuf` callers stop having to round-trip through `String`. Separately, consider whether `list_dir`/`find_files` should return `Vec<PathBuf>` instead of `Vec<String>` (breaking).
## Dependencies
- `ureq` - Sync HTTP client
- `serde` + `serde_json` - JSON serialization