§agnt
A dense, sync-first Rust agent engine. Multi-backend LLM inference with streaming, parallel tool dispatch, SQLite session persistence, and microsecond-level tool profiling — no async runtime required. Around 6,000 LOC across seven crates as of v0.3.1; see the repository README for the live breakdown.
§Quick start

```rust
use agnt::{Agent, Backend};

let backend = Backend::ollama("gemma4:e4b");
let mut agent = Agent::new(backend, "You are a helpful assistant.");
agent.tools.register(Box::new(agnt::builtins::ReadFile::new()));

let reply = agent.step("Read /etc/hostname and tell me the hostname.").unwrap();
println!("{}", reply);
```

§Architecture (v0.3.1 — seven-crate workspace)
The flagship `agnt` crate is a thin re-export over six underlying library crates. Everything is feature-gated so consumers can pick the slice they need — WASM / embedded callers depend only on `agnt-core`.
- `agnt-core` — traits, types, agent loop, quotas, observer hooks. Zero I/O dependencies.
- `agnt-net` — HTTP backend implementation (Ollama / OpenAI / Anthropic). `net` feature.
- `agnt-store` — SQLite message store with µs-precision tool log. `store` feature.
- `agnt-tools` — built-in tools with filesystem sandbox, atomic SSRF-guarded Fetch, and opt-in Shell (plus `bwrap-shell` on Linux). `tools` feature.
- `agnt-macros` — `#[tool]` attribute macro. `macros` feature (default on).
- `agnt-mcp` — MCP stdio client. `mcp` feature (off by default).
`default = ["net", "store", "tools", "macros"]` gives you the working runtime from a single `cargo add agnt`. Opt in to `mcp` and `tools-bwrap-shell` as needed.
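As a sketch, a `Cargo.toml` entry opting into the non-default features might look like the following (feature names are taken from the crate list above; the version pin is illustrative):

```toml
[dependencies]
# Default features (net, store, tools, macros) stay enabled; add the opt-in extras.
agnt = { version = "0.3.1", features = ["mcp", "tools-bwrap-shell"] }
```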
§Design principles
- Sync-first. No tokio required. Tool dispatch uses `std::thread::scope` for parallelism without an async runtime.
- Structurally sandboxed. Filesystem root, atomic SSRF resolver, opt-in Shell, optional bubblewrap — each layer is designed assuming the LLM output is hostile.
- Multi-backend from day one. One internal `Message` type; providers translate at the wire boundary.
- Auditable by module. Security-critical paths (agent loop, tools, sandbox, SSRF resolver, MCP framing) each live in a single small file so reviewers can read them in isolation.
See the README for benchmarks, the current threat model, and the roadmap.
Modules§
- agent — Alias for the agent loop module, for explicit access.
- backend
- builtins
- http
- store
- tool — Alias for the tool module, for explicit access.
Structs§
- Agent — The agent loop.
- AgentBuilder — Fluent builder for `Agent`.
- Backend — A multi-provider LLM backend.
- ErasedAdapter — Adapter that turns any `TypedTool` into an erased `Tool`.
- FilesystemRoot — A canonicalized sandbox root. All paths resolved through this instance are guaranteed to live under the root directory on the local filesystem.
- FunctionCall
- Message — A conversation message in the OpenAI-flavored internal format.
- Registry — A collection of tools with name-based dispatch.
- Store — SQLite-backed session store.
- ToolCall
- ToolLog — A tool execution record to persist.
- ToolQuota — Per-tool quota (v0.3 M3).
- ToolResult — Result of a tool execution, passed to observers after dispatch.
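Name-based dispatch of the kind `Registry` describes can be sketched with a map from tool name to boxed trait object. The `Tool` trait and method names below are illustrative stand-ins, not agnt's real signatures:

```rust
use std::collections::HashMap;

// Illustrative erased-tool trait; agnt's actual Tool trait differs.
trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

struct Echo;
impl Tool for Echo {
    fn name(&self) -> &str { "echo" }
    fn call(&self, input: &str) -> String { input.to_string() }
}

struct Registry {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl Registry {
    fn new() -> Self {
        Registry { tools: HashMap::new() }
    }

    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name().to_string(), tool);
    }

    // Dispatch by name; an unknown tool surfaces as an error the agent
    // loop can feed back to the model instead of panicking.
    fn dispatch(&self, name: &str, input: &str) -> Result<String, String> {
        self.tools
            .get(name)
            .map(|t| t.call(input))
            .ok_or_else(|| format!("unknown tool: {name}"))
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register(Box::new(Echo));
    assert_eq!(reg.dispatch("echo", "hi"), Ok("hi".to_string()));
    assert!(reg.dispatch("missing", "").is_err());
}
```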
Enums§
- BackendError — Error returned by `LlmBackend::chat`.
- Disposition — Disposition returned by `Observer::should_dispatch` — whether a tool call should proceed, be refused, or be intercepted.
- StoreError — Error returned by `MessageStore` operations.
Traits§
- LlmBackend — Abstract LLM backend.
- MessageStore — Abstract session persistence.
- Observer — Lifecycle observer. Every method has a default no-op implementation so implementors override only the hooks they care about.
- Tool — A tool the agent can invoke (erased form).
- TypedTool — A typed tool — associated input/output/error types, schema as const.
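The default-no-op pattern described for `Observer` can be sketched as follows; the hook names and signatures here are hypothetical, chosen only to show the shape (agnt's real trait has different methods):

```rust
// Illustrative observer trait: every hook has a default no-op body, so
// an implementor overrides only the lifecycle events it cares about.
trait Observer {
    fn on_message(&mut self, _content: &str) {}
    fn on_tool_result(&mut self, _name: &str, _output: &str) {}
}

// An implementor interested only in tool results.
struct ToolLogger {
    seen: Vec<String>,
}

impl Observer for ToolLogger {
    fn on_tool_result(&mut self, name: &str, _output: &str) {
        self.seen.push(name.to_string());
    }
}

fn main() {
    let mut obs = ToolLogger { seen: Vec::new() };
    obs.on_message("hello");               // falls through to the no-op default
    obs.on_tool_result("read_file", "ok"); // overridden hook records the call
    assert_eq!(obs.seen, vec!["read_file".to_string()]);
    println!("observed {} tool result(s)", obs.seen.len());
}
```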
Attribute Macros§
- tool — Generate a [`TypedTool`] impl from a free function.