# atomr-agents
A native Rust agentic framework built as a layered actor / strategy / harness substrate on top of atomr and atomr-infer. atomr-agents gives you a single mental model — pluggable strategies that resolve under shared budgets, channelled state with first-class checkpointing, tool-call orchestration with parallel dispatch, and durable harness loops — that scales from a one-off chatbot to a multi-tenant production agent platform.
```rust
use atomr_agents::prelude::*;

// One Pipeline composes prompt → model → parser like LCEL.
// (`prompt`, `model`, `parser`, and `input` are illustrative Callables/values;
// see atomr-agents-callable for the builder API.)
let pipeline = Pipeline::from(prompt)
    .then(model)
    .then(parser)
    .build();

let answer = pipeline.call(input).await?;
```
## Why an agentic framework, in Rust, on actors
Agentic systems don't fail because the models aren't good enough — they fail because the substrate underneath them treats context, composition, and persistence as afterthoughts. Glue-code retry policies, opaque memory, hand-rolled tool loops, brittle handoff between agents — that's where 3 a.m. pages come from.
**Composition is the unit of work.** A real agent is a `Pipeline` of prompts, models, parsers, and tools — each with its own retry, fallback, timeout, cache, and trace policy. atomr-agents makes every component a `Callable` with the same composition surface, so `with_retry`, `with_fallbacks`, and `with_config` apply uniformly to prompts, models, retrievers, and parsers alike.
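A minimal sketch of that uniformity, assuming the builder and decorator names from `atomr-agents-callable`; the component variables and the retry/fallback arguments are illustrative, not confirmed signatures:

```rust
use atomr_agents::prelude::*;

// `prompt`, `model`, `backup_model`, and `parser` stand in for any Callable.
let pipeline = Pipeline::from(prompt)
    .then(
        model
            .with_retry(3)                        // same decorator on the model step...
            .with_fallbacks(vec![backup_model]),  // ...including a fallback chain
    )
    .then(parser.with_retry(2))                   // ...and on the parser alike
    .build();
```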
**State is channelled, durable, and forkable.** Long-running agents need more than chat history. They need typed channels with reducers (`AppendMessages`, `MergeMap`, `LastWriteWins`, `MaxByTimestamp`), per-super-step checkpoints keyed by `(workflow, run, step)`, and fork-with-edit so an operator can branch a divergent run from any prior state. atomr-agents ships LangGraph's state model verbatim in atomr's actor idiom.
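A sketch under the names `atomr-agents-state` exposes; the builder methods, the `Reducer` enum, the checkpoint-lookup signature, and the free variables (`run_id`, `corrected_plan`) are assumptions for illustration, not the crate's exact API:

```rust
use atomr_agents::state::{InMemoryCheckpointer, Reducer, StateSchema};

// Two typed channels: messages accumulate, scratch keys merge map-wise.
let schema = StateSchema::builder()
    .channel("messages", Reducer::AppendMessages)
    .channel("scratch", Reducer::MergeMap)
    .build();

// Checkpoints are addressed by (workflow, run, step); fork-with-edit patches a
// prior state and resumes the branch as a new run.
let checkpointer = InMemoryCheckpointer::default();
let prior = checkpointer.load(("triage", run_id, 7)).await?;
let branch = checkpointer
    .fork_with_edit(prior, |state| state.set("scratch", corrected_plan))
    .await?;
```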
**Tool calls are parallel and provider-agnostic.** When a model emits five tool calls in one turn, atomr-agents fans them into a `JoinSet` and aggregates in original order. The streaming `tool_call_delta` parser handles OpenAI and Anthropic deltas natively; new providers plug in behind the same `Provider` enum. Per-call deltas are also surfaced as `Event::ToolCallStreamed` so tracers and UIs see tool intent in real time, distinct from the post-call `Event::ToolInvoked`. `RichTool` returns `ToolReturn::{Content, ContentAndArtifact, Command}` so a tool can also drive graph control flow.
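The aggregation itself is the standard tokio fan-out pattern. A self-contained sketch of that pattern (placeholder `ToolCall` / `ToolOutput` / `execute_tool` types, not the framework's internals) that preserves the original call order:

```rust
use tokio::task::JoinSet;

// Placeholder types standing in for the framework's tool-call structures.
#[derive(Clone)]
struct ToolCall { name: String, args: String }
struct ToolOutput { content: String }

async fn execute_tool(call: ToolCall) -> ToolOutput {
    // Stand-in for real tool execution.
    ToolOutput { content: format!("{}({})", call.name, call.args) }
}

// Fan every tool call from one model turn into a JoinSet, then restore the
// original order once all of them have finished.
async fn dispatch_parallel(calls: Vec<ToolCall>) -> Vec<ToolOutput> {
    let n = calls.len();
    let mut set = JoinSet::new();
    for (idx, call) in calls.into_iter().enumerate() {
        // Each call runs concurrently; the index tags its original position.
        set.spawn(async move { (idx, execute_tool(call).await) });
    }

    let mut out: Vec<Option<ToolOutput>> = (0..n).map(|_| None).collect();
    while let Some(joined) = set.join_next().await {
        let (idx, output) = joined.expect("tool task panicked");
        out[idx] = Some(output);
    }
    // Results arrive as they finish but are returned in emission order.
    out.into_iter().map(|o| o.unwrap()).collect()
}
```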
**Provider runtimes are opt-in feature flags.** Enable `provider-anthropic`, `provider-openai`, or `provider-gemini` on the umbrella to pull the corresponding `atomr-infer-runtime-*` crate and re-export its `*Config` / `*Pricing` / `*Runner` via `atomr_agents::agent::providers::{anthropic, openai, gemini}`. Cost reports include `cached_tokens` (Anthropic prompt-cache, OpenAI cached input) and `reasoning_tokens` (o1-style) automatically.
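For example, with `provider-openai` enabled on the umbrella, usage would look roughly like this; `OpenAiConfig` and the constructor call follow the `*Config` / `*Runner` naming convention and are assumptions rather than confirmed signatures:

```rust
// Enabled via: atomr-agents = { version = "0.2", features = ["agent", "provider-openai"] }
use atomr_agents::agent::providers::openai::{OpenAiConfig, OpenAiRunner};

let runner = OpenAiRunner::new(OpenAiConfig::default()); // illustrative constructor
```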
**Granular efficiency.** Rust gives us deterministic resource use, zero-cost abstractions, and ownership-as-concurrency-safety. `Strategy` trait generics monomorphize the per-turn pipeline; `Box<dyn>` opt-in exists for config-driven loading. The whole 26-crate workspace builds under `cargo check --workspace` in seconds and ships zero runtime overhead beyond what the actor crate already pays.
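The trade-off in miniature, using a stand-in trait rather than the real Strategy family: the generic path is monomorphized into a specialized copy per strategy type, while the boxed path keeps one compiled copy and defers the choice to runtime:

```rust
// Stand-in trait; the real Strategy family lives in atomr-agents-strategy.
trait ExampleStrategy {
    fn select(&self, input: &str) -> Vec<String>;
}

// Monomorphized: the compiler emits one specialized copy per concrete S,
// so the per-turn pipeline pays no dynamic dispatch.
fn run_turn_generic<S: ExampleStrategy>(strategy: &S, input: &str) -> Vec<String> {
    strategy.select(input)
}

// Opt-in dynamic dispatch: a single compiled copy, strategy chosen from config
// at runtime, at the cost of a vtable indirection per call.
fn run_turn_dyn(strategy: &dyn ExampleStrategy, input: &str) -> Vec<String> {
    strategy.select(input)
}
```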
## What's in the box
| Crate | What it does |
|---|---|
| atomr-agents | Umbrella facade re-exporting the public surface, feature-flag-driven |
| atomr-agents-core | Ids, budgets (token / time / money / iteration), AgentContext, RunId, structured Event taxonomy, error types |
| atomr-agents-callable | Callable trait, CallableHandle, Pipeline builder (then / fan_out / assign), decorators (with_retry / with_fallbacks / with_config / with_timeout / Branch / Lambda) |
| atomr-agents-strategy | Strategy trait family (ToolStrategy, MemoryStrategy, SkillStrategy, RoutingStrategy, PolicyStrategy, LoopStrategy, TerminationStrategy) + combinators |
| atomr-agents-context | ContextAssembler — priority-merge under a TokenBudget |
| atomr-agents-observability | EventBus, RunTree builder, Tracer trait, StdoutTracer / JsonlTracer / LangSmithTracer |
| atomr-agents-state | StateSchema + 5 reducers, RunState, Checkpointer trait + InMemoryCheckpointer, fork-with-edit; SQLite/Postgres backend stubs behind features |
| atomr-agents-tool | Tool / RichTool traits, ToolDescriptor, ToolSet + ToolSetRegistry, PermissionSpec, provider-aware ToolCallParser (OpenAI / Anthropic), HandoffTool |
| atomr-agents-skill | Skill, SkillSet, Static / Keyword skill strategies |
| atomr-agents-memory | MemoryStore (short-term) + LongStore (long-term, namespace-tupled), RecencyMemoryStrategy / SummarizingMemoryStrategy / ChainedMemoryStrategy, WriteMemoryTool / UpdateMemoryTool / RecallMemoryTool |
| atomr-agents-embed | Embedder trait, MockEmbedder, AnnIndex + InMemoryAnnIndex, EmbeddingToolStrategy |
| atomr-agents-retriever | Retriever zoo: Bm25 / Vector / MultiQuery / ContextualCompression / ParentDocument / Ensemble (RRF) / SelfQuery / EmbeddingsFilter / TimeWeighted |
| atomr-agents-ingest | Loader (text / md / json / csv) + splitters (Recursive / MarkdownHeader / Code / Token / Semantic) + CachedEmbedder + IngestPipeline |
| atomr-agents-persona | All five structural strategies (Static, BigFive, Mbti, Jungian, Composite) + emphasis strategies (Static, AudienceAdaptive, TaskAdaptive, MoodState, GoalConditioned) |
| atomr-agents-instruction | ComposedInstructionStrategy<P, T, B>, ChatPromptTemplate, MessagesPlaceholder, FewShotChatTemplate, LengthBasedSelector / SemanticSimilaritySelector |
| atomr-agents-agent | Agent<I, T, Ms, Sk> actor + per-turn pipeline, tool-call loop with parallel dispatch, AgentMiddleware (logging / retry / rate-limit / redaction / tool-error-recovery), InferenceClient adapter for any ModelRunner |
| atomr-agents-workflow | DAG primitives, WorkflowRunner, StatefulRunner (channelled state), Interruptible (interrupt() + interrupt_before / _after + Command::{Continue, Resume, Update, Goto}), Subgraph, dispatch_fan_out (Send-API analogue) |
| atomr-agents-harness | Harness<L, T> actor, LoopStrategy / TerminationStrategy, durable iteration log; Harness is itself a Callable |
| atomr-agents-org | Org / Department / Team, OrgRoutingStrategy impls (RoundRobin / LoadAware / CapabilityMatch), Policy::narrow, NamespacedMemory (read-cascade + write-gating), swarm_loop helper |
| atomr-agents-registry | Versioned artifact registry with (kind, id, version) keys + publish_gated for eval-regression blocking |
| atomr-agents-eval | EvalSuite, Scorer (Contains / Equality / Regex / LlmJudgeScorer / RubricScorer / PairwiseScorer), RegressionGate, AnnotationQueue |
| atomr-agents-cache | LlmCache trait + InMemoryLlmCache + SemanticLlmCache (cosine match on prompt embedding); SQLite/Redis backend stubs behind features |
| atomr-agents-parser | Parser<T> trait, JsonParser / JsonSchemaParser / SchemaParser<T> / EnumParser / CommaListParser / XmlParser / YamlParser, OutputFixingParser, RetryWithErrorParser, StreamingPartialJsonParser |
| atomr-agents-py-bindings | atomr_agents._native PyO3 module — Event / EventBus / Registry exposed to Python |
| atomr-agents-cli | atomr-agents binary with eval / registry / harness / serve (Studio-style read+resume inspector) subcommands |
| atomr-agents-testkit | Stub crate today. For tests, depend on atomr-infer-testkit (re-exports MockRunner / MockScript) directly — that's what crates/agent tests use. |
Plus a Python facade — pip install atomr-agents — that exposes the
host-mode Registry / EventBus and the guest-mode @tool /
@strategy / @persona decorators.
## Quick start (Rust)
The umbrella crate is published on crates.io as atomr-agents:
```toml
[dependencies]
atomr-agents = { version = "0.2", features = ["agent", "harness", "eval"] }
atomr-infer  = { version = "0.6", features = ["openai"] }  # or any provider
```
Or, to pull a provider runtime through the umbrella so Agent /
LocalRunnerClient / OpenAiRunner come from one crate:
= { = "0.2", = ["agent", "provider-openai"] }
# or features = ["agent", "provider-anthropic"], ["agent", "provider-gemini"]
A minimal agent against MockRunner (good for tests; swap for any
ModelRunner in production):
```rust
use std::sync::Arc;

use atomr_agents::prelude::*;
use atomr_agents::agent::{Agent, InferenceClient};
use atomr_agents::observability::EventBus;
use atomr_agents::persona::StaticPersonaStrategy;
use atomr_agents::skill::StaticSkillStrategy;
use atomr_infer_testkit::{MockRunner, MockScript};

// Illustrative reconstruction: module paths, constructor arguments, and the
// Agent builder surface are indicative; see docs/agent-pipeline.md and the
// crates/agent tests for the exact API.
let runner = MockRunner::new(MockScript::default());
let inference: InferenceClient<MockRunner> = InferenceClient::new(Arc::new(runner));

let agent = Agent::builder()
    .inference(inference)
    .skills(StaticSkillStrategy::default())
    .persona(StaticPersonaStrategy::default())
    .events(EventBus::default())
    .build()?;

let r = agent
    .run_turn("Summarize the open incidents and suggest next steps")
    .await?;

println!("{}", r.text());
```
Add tools, switch the MockRunner to a real ModelRunner (OpenAI,
Anthropic, vLLM, …), and the same code runs unchanged.
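What adding a tool might look like, sketched against the Tool / ToolDescriptor names from `atomr-agents-tool`; the trait's method names, argument types, and the descriptor constructor here are assumptions, not the crate's exact surface:

```rust
use atomr_agents::tool::{Tool, ToolDescriptor};

struct Weather;

impl Tool for Weather {
    // Hypothetical descriptor shape: a name plus a short description for the model.
    fn descriptor(&self) -> ToolDescriptor {
        ToolDescriptor::new("weather", "Current weather for a city")
    }

    // Hypothetical call signature: JSON arguments in, text content out.
    async fn call(&self, args: serde_json::Value) -> anyhow::Result<String> {
        let city = args["city"].as_str().unwrap_or("unknown");
        Ok(format!("22°C and clear in {city}"))
    }
}
```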
## Quick start (Python)
```python
from atomr_agents import EventBus, Registry  # host-mode surface; see docs/python.md

registry = Registry()
bus = EventBus()
```
See docs/python.md for the full host/guest model and the
subinterpreter-pool dispatcher pattern inherited from atomr's pycore.
## Documentation map
- docs/index.md — documentation hub
- docs/architecture.md — runtime layout, crate stack, where each layer slots in
- docs/state-and-checkpointing.md — channels, reducers, Checkpointer, fork/replay
- docs/agent-pipeline.md — the per-turn pipeline + tool-call loop + middleware
- docs/workflows-and-hitl.md — DAG, Send-API, dynamic interrupts, breakpoints
- docs/retrieval-and-ingestion.md — retriever zoo, LongStore, loaders, splitters
- docs/observability.md — EventBus, RunTree, tracers
- docs/eval.md — eval suites, judge / pairwise / rubric scorers, regression gate
- docs/multi-agent-patterns.md — supervisor / swarm / network / hierarchical
- docs/feature-matrix.md — every feature flag, what it pulls in
- docs/python.md — Python bindings + subinterpreter-pool guest mode
- docs/migrating-from-langgraph.md — concept-mapping table + concrete code translations
- ai-skills/ — Claude Code / Agent SDK skills for AI-assisted coding against atomr-agents
## License
Apache-2.0.