> [!TIP]
> If Appam is useful, ⭐ star the repo. It materially helps the project reach more builders, and that means better, more reliable agents for all of us.
Appam is for agent systems that need more than a toy chat loop. It is designed for workloads where the hard parts are operational: multi-turn tool use, session continuation, event streaming, traceability, provider switching, and reliability under repeated runs.
The name
`appam` is derived from the Malayalam saying "അപ്പം പോലെ ചുടാം", which roughly means "as easy as baking an appam."
Why Appam
- Rust-first agent construction with `Agent::quick(...)`, `Agent::new(...)`, and `AgentBuilder`
- Typed tool system using the `#[tool]` macro, direct `Tool` implementations, or `ClosureTool`
- Streaming by default through `StreamBuilder`, `StreamConsumer`, and built-in consumers
- Durable sessions with SQLite-backed `SessionHistory` and `continue_session(...)`
- Traceable runs through built-in JSONL traces and structured stream events
- Provider portability across Anthropic, OpenAI, OpenAI Codex, OpenRouter, Vertex, Azure, and Bedrock
- Production controls for retries, continuation mechanics, reasoning, caching, rate limiting, and provider-specific tuning
What Appam Is Good At
Appam fits best when you want to build agents like:
- Coding agents that read files, write files, and run commands
- Research or operations agents that loop through tools over many turns
- Services that need streaming output plus session persistence
- Internal automation where runs must be inspectable after the fact
- Systems that may need to switch providers without rewriting the agent runtime
If your agent is mostly "prompt in, string out", Appam still works, but its real value shows up once orchestration, tools, and observability matter.
Installation
Add the crate and Tokio:

```sh
cargo add appam
cargo add tokio --features macros,rt-multi-thread
```

If you plan to define typed tool inputs, serde is useful too:

```sh
cargo add serde --features derive
```

Or add the dependencies manually in `Cargo.toml`:

```toml
[dependencies]
appam = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
```
Provider Setup
Set credentials for the provider you want to use:
| Provider | Minimum setup |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| OpenAI Codex | OPENAI_CODEX_ACCESS_TOKEN or a cached login in ~/.appam/auth.json |
| OpenRouter | OPENROUTER_API_KEY |
| Vertex | GOOGLE_VERTEX_API_KEY, GOOGLE_API_KEY, GEMINI_API_KEY, or GOOGLE_VERTEX_ACCESS_TOKEN |
| Azure OpenAI | AZURE_OPENAI_API_KEY and AZURE_OPENAI_RESOURCE |
| Azure Anthropic | AZURE_API_KEY or AZURE_ANTHROPIC_API_KEY, plus AZURE_ANTHROPIC_BASE_URL or AZURE_ANTHROPIC_RESOURCE |
| AWS Bedrock | AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or AWS_BEARER_TOKEN_BEDROCK |
Common model override variables:
- `ANTHROPIC_MODEL`
- `OPENAI_MODEL`
- `OPENAI_CODEX_MODEL`
- `OPENROUTER_MODEL`
- `GOOGLE_VERTEX_MODEL`
- `AZURE_OPENAI_MODEL`
- `AZURE_ANTHROPIC_MODEL`
- `AWS_BEDROCK_MODEL_ID`
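For example, pointing an agent at Anthropic with a non-default model could look like this in your shell (placeholder values, shown as an illustration of the variables above):

```shell
export ANTHROPIC_API_KEY="your-api-key"
export ANTHROPIC_MODEL="claude-sonnet-4-5"
```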
Quickstart
The smallest useful Appam program is a Rust agent with streaming output:
```rust
use appam::*;

// Minimal sketch; exact argument and return types are in the API docs.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Agent::quick infers the provider from the model string.
    let agent = Agent::quick("anthropic/claude-sonnet-4-5")?;
    agent
        .stream("Summarize this repository's README.")
        .on_content(|text| print!("{text}"))
        .run()
        .await?;
    Ok(())
}
```
`Agent::quick(...)` is the fast path:
- infers the provider from the model string
- creates a `RuntimeAgent`
- applies sensible defaults for temperature, max tokens, top-p, and retries
Examples of model strings Appam recognizes:
| Model string | Provider |
|---|---|
| `anthropic/claude-sonnet-4-5` | Anthropic |
| `openai/gpt-5.4` | OpenAI |
| `openai-codex/gpt-5.4` | OpenAI Codex |
| `openrouter/anthropic/claude-sonnet-4-5` | OpenRouter Responses |
| `vertex/gemini-2.5-flash` | Vertex |
| `gemini-2.5-pro` | Vertex |
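As a rough illustration (not Appam's actual code), the detection the table describes amounts to prefix matching on the model string, with unknown strings falling back to OpenRouter Responses:

```rust
// Illustrative sketch of model-string provider detection. The real
// detection covers more prefixes (o1-*, o3-*, google/gemini-*, etc.).
fn infer_provider(model: &str) -> &'static str {
    if model.starts_with("anthropic/") || model.starts_with("claude-") {
        "Anthropic"
    } else if model.starts_with("openai-codex/") {
        "OpenAI Codex"
    } else if model.starts_with("openai/") || model.starts_with("gpt-") {
        "OpenAI"
    } else if model.starts_with("vertex/") || model.starts_with("gemini-") {
        "Vertex"
    } else {
        // Unknown strings (including openrouter/...) go to OpenRouter Responses.
        "OpenRouter Responses"
    }
}

fn main() {
    assert_eq!(infer_provider("vertex/gemini-2.5-flash"), "Vertex");
    println!("{}", infer_provider("some-new-model"));
}
```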
Tool-Using Agents
Appam's recommended tool path is native Rust.
The #[tool] macro turns normal Rust functions into runtime tools with generated JSON Schema and argument decoding:
```rust
use appam::*;

/// Read a UTF-8 file and return its contents.
/// The #[tool] macro generates the JSON Schema and argument decoding.
#[tool]
async fn read_file(path: String) -> Result<String, String> {
    tokio::fs::read_to_string(&path)
        .await
        .map_err(|e| e.to_string())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Signature details for #[tool] are illustrative; see the API docs.
    let agent = Agent::new("coder", "anthropic/claude-sonnet-4-5")
        .prompt("You are a coding assistant.")
        .tool(read_file)
        .build()?;
    let answer = agent.run("What does Cargo.toml declare?").await?;
    println!("{answer}");
    Ok(())
}
```
You can define tools in three main ways:
- `#[tool]` for the best Rust DX
- `Tool` trait implementations for full control
- `ClosureTool` for fast inline utilities
Key tooling types:
| Type | Purpose |
|---|---|
| `#[tool]` | Generate a `Tool` implementation from a function |
| `Schema` | Derive JSON Schema for typed input structs |
| `Tool` | Core trait for tool execution |
| `ToolRegistry` | Shared registry for tool lookup and execution |
| `ClosureTool` | Lightweight runtime tool defined from a closure |
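To make the trait-plus-registry pattern concrete, here is a minimal synchronous sketch in plain Rust. Appam's real `Tool` trait is async and JSON-based; the type names below only mirror the table:

```rust
use std::collections::HashMap;

// Simplified stand-in for Appam's Tool trait (sync, string in/out).
trait Tool {
    fn name(&self) -> &str;
    fn execute(&self, input: &str) -> Result<String, String>;
}

// A closure-backed tool, in the spirit of ClosureTool.
struct ClosureTool<F: Fn(&str) -> Result<String, String>> {
    name: String,
    f: F,
}

impl<F: Fn(&str) -> Result<String, String>> Tool for ClosureTool<F> {
    fn name(&self) -> &str {
        &self.name
    }
    fn execute(&self, input: &str) -> Result<String, String> {
        (self.f)(input)
    }
}

// A shared registry for tool lookup and execution, like ToolRegistry.
struct ToolRegistry {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl ToolRegistry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }
    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name().to_string(), tool);
    }
    fn execute(&self, name: &str, input: &str) -> Result<String, String> {
        match self.tools.get(name) {
            Some(tool) => tool.execute(input),
            None => Err(format!("unknown tool: {name}")),
        }
    }
}

fn main() {
    let mut registry = ToolRegistry::new();
    registry.register(Box::new(ClosureTool {
        name: "shout".to_string(),
        f: |input: &str| Ok(input.to_uppercase()),
    }));
    println!("{:?}", registry.execute("shout", "hi"));
}
```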
Agent Construction Styles
Appam gives you three Rust-first ways to construct agents:
1. Agent::quick(...)
Use this when you want the smallest amount of setup.
Best for:
- scripts
- prototypes
- simple services
- smoke tests against a provider
2. Agent::new(name, model)
This is the ergonomic middle ground. It returns an AgentBuilder with provider detection already applied.
```rust
// Argument values are illustrative.
let agent = Agent::new("coder", "anthropic/claude-sonnet-4-5")
    .prompt("You are a careful coding assistant.")
    .tool(read_file) // reuse any Tool generated with #[tool]
    .build()?;
```
3. AgentBuilder
Use this when you need explicit provider configuration and runtime control:
- reasoning or thinking settings
- prompt caching
- retry behavior
- rate limiting
- traces and session history
- Azure or Bedrock-specific provider setup
Streaming, Sessions, and Traces
Streaming is a first-class part of the runtime, not an afterthought. Every run can emit structured events for text, reasoning, tool calls, tool results, usage updates, and completion.
Closure-based streaming
For most applications, use agent.stream(...) and attach handlers:
```rust
use appam::*;

// Closure signatures below are illustrative; see the API docs for the
// exact event payload types.
let session = agent
    .stream("Refactor src/lib.rs and run the tests")
    .on_session_started(|id| println!("session: {id}"))
    .on_content(|text| print!("{text}"))
    .on_reasoning(|text| eprintln!("[thinking] {text}"))
    .on_tool_call(|call| println!("→ {call}"))
    .on_tool_result(|result| println!("← {result}"))
    .on_tool_failed(|err| eprintln!("tool failed: {err}"))
    .on_error(|err| eprintln!("error: {err}"))
    .on_done(|| println!("done"))
    .run()
    .await?;
```
Reusable consumers
If you need reusable pipelines, Appam also exposes StreamConsumer plus built-in consumers such as:
- `ConsoleConsumer`
- `ChannelConsumer`
- `CallbackConsumer`
- `TraceConsumer`
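The consumer idea can be sketched with a plain std channel; Appam's `ChannelConsumer` is async (Tokio), so this thread-based version only illustrates the split between producing stream events and consuming them:

```rust
use std::sync::mpsc;
use std::thread;

// Simplified stand-in for Appam's structured stream events.
#[derive(Debug)]
enum StreamEvent {
    Content(String),
    Done,
}

// Drain events from the channel, accumulating text until Done.
fn consume(rx: mpsc::Receiver<StreamEvent>) -> String {
    let mut text = String::new();
    for event in rx {
        match event {
            StreamEvent::Content(chunk) => text.push_str(&chunk),
            StreamEvent::Done => break,
        }
    }
    text
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // The producer plays the role of the model stream.
    let producer = thread::spawn(move || {
        tx.send(StreamEvent::Content("hello ".into())).unwrap();
        tx.send(StreamEvent::Content("world".into())).unwrap();
        tx.send(StreamEvent::Done).unwrap();
    });
    let text = consume(rx);
    producer.join().unwrap();
    println!("{text}");
}
```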
Session persistence
If you want continuation across runs, enable history on the agent:
```rust
use appam::*;

// Prompts, paths, and field names are illustrative.
let agent = Agent::new("assistant", "anthropic/claude-sonnet-4-5")
    .prompt("You are a planning assistant.")
    .enable_history(true)
    .history_db_path("data/sessions.db")
    .auto_save_sessions(true)
    .build()?;

let first = agent.run("Draft a migration plan.").await?;
let second = agent
    .continue_session(&first.session_id, "Expand step two.")
    .await?;
println!("{second}");
```
For direct history operations, use SessionHistory:
```rust
use appam::*;

// The config type name here is assumed; check the API docs for the exact struct.
let mut config = SessionHistoryConfig::default();
config.enabled = true;
config.db_path = "data/sessions.db".into();

let history = SessionHistory::new(config).await?;
let sessions = history.list_sessions().await?;
println!("{} saved sessions", sessions.len());
```
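Conceptually, session continuation is just appending turns to a keyed history. A minimal in-memory sketch of that bookkeeping (Appam persists it in SQLite instead):

```rust
use std::collections::HashMap;

// Each session id maps to its ordered list of turns.
struct SessionStore {
    sessions: HashMap<String, Vec<String>>,
}

impl SessionStore {
    fn new() -> Self {
        Self { sessions: HashMap::new() }
    }
    // Append a turn, creating the session on first use.
    fn append(&mut self, session_id: &str, turn: &str) {
        self.sessions
            .entry(session_id.to_string())
            .or_default()
            .push(turn.to_string());
    }
    fn turns(&self, session_id: &str) -> usize {
        self.sessions.get(session_id).map_or(0, |v| v.len())
    }
}

fn main() {
    let mut store = SessionStore::new();
    store.append("sess-1", "user: plan the migration");
    store.append("sess-1", "assistant: step one is to inventory callers");
    // A later continue_session-style call appends to the same id.
    store.append("sess-1", "user: expand step two");
    println!("{} turns", store.turns("sess-1"));
}
```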
Traces
Enable built-in traces on the agent when you want replayable, inspectable runs:
```rust
use appam::*;

// The TraceFormat variant name is assumed; Appam's built-in traces are JSONL.
let agent = Agent::new("assistant", "anthropic/claude-sonnet-4-5")
    .prompt("You are a helpful assistant.")
    .enable_traces(true)
    .trace_format(TraceFormat::Jsonl)
    .build()?;
```
This gives you structured event output that is much easier to inspect than plain console logs.
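One reason JSONL works well for traces: each event is a single self-contained JSON line, appended as it happens, so runs can be inspected with line-oriented tools like `grep`. A hand-rolled sketch (Appam's real trace schema is richer than this):

```rust
// Build one trace line per event: a flat JSON object on a single line.
fn trace_line(event: &str, detail: &str) -> String {
    format!("{{\"event\":\"{event}\",\"detail\":\"{detail}\"}}")
}

fn main() {
    let log = vec![
        trace_line("session_started", "sess-1"),
        trace_line("tool_call", "read_file"),
        trace_line("done", "ok"),
    ];
    // Each line is independently parseable, so a streaming reader (or
    // `grep tool_call trace.jsonl`) can inspect a run without loading it all.
    for line in &log {
        println!("{line}");
    }
}
```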
Providers
Appam exposes one orchestration surface across multiple LLM providers:
| Provider | Runtime path |
|---|---|
| Anthropic Messages API | LlmProvider::Anthropic or anthropic/... / claude-* model strings |
| OpenAI Responses API | LlmProvider::OpenAI or openai/... / gpt-* / o1-* / o3-* model strings |
| OpenAI Codex Responses API | LlmProvider::OpenAICodex or openai-codex/... model strings |
| OpenRouter Responses API | LlmProvider::OpenRouterResponses or openrouter/... model strings |
| OpenRouter Completions API | LlmProvider::OpenRouterCompletions |
| Google Vertex AI | LlmProvider::Vertex or vertex/... / gemini-* / google/gemini-* model strings |
| Azure OpenAI | LlmProvider::AzureOpenAI { .. } |
| Azure Anthropic | LlmProvider::AzureAnthropic { .. } |
| AWS Bedrock | LlmProvider::Bedrock { .. } |
Notes:
- `Agent::quick(...)` and `Agent::new(...)` auto-detect common providers from the model string.
- Unknown model strings fall back to OpenRouter Responses.
- Azure and Bedrock are best configured explicitly through `AgentBuilder`.
Operational Controls
Appam includes the runtime controls that usually get bolted on later:
- retries with exponential backoff
- reasoning configuration for provider families that support it
- Anthropic thinking, caching, tool choice, and rate limiting
- OpenRouter provider preferences and transform controls
- OpenAI service tier and text verbosity settings
- maximum continuations and required completion tools for long-running flows
That lets you keep the orchestration layer inside Rust instead of scattering runtime rules across wrapper scripts.
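For instance, retry with exponential backoff reduces to a small delay schedule. A sketch with assumed base and cap parameters (Appam's actual retry knobs live on `AgentBuilder` and may differ):

```rust
use std::time::Duration;

// Delay for the nth retry attempt: base * 2^attempt, capped at max_ms.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let exp = base_ms.saturating_mul(2u64.saturating_pow(attempt));
    Duration::from_millis(exp.min(max_ms))
}

fn main() {
    // 100ms, 200ms, 400ms, ... capped at 5s.
    for attempt in 0..6 {
        println!("attempt {attempt}: {:?}", backoff_delay(attempt, 100, 5000));
    }
}
```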
Examples and Docs
Start here
- Getting started: installation
- Getting started: quickstart
- Getting started: first agent with tools
- Core concepts: agents
- Core concepts: tools
- Core concepts: streaming
- Core concepts: sessions
- Core concepts: providers
Working examples
- Anthropic coding agent
- OpenAI coding agent
- OpenAI Codex coding agent
- OpenRouter Responses coding agent
- OpenRouter Completions coding agent
- Vertex coding agent
- Azure OpenAI coding agent
- Azure Anthropic coding agent
- Bedrock coding agent
API docs
Development