agentix
A Rust framework for building LLM agents and multi-agent pipelines. Supports DeepSeek, OpenAI, Anthropic (Claude), and Google Gemini out of the box — plus any OpenAI-compatible endpoint.
Built on a pure stream-based architecture: agents and nodes are stream transformers (`Node`) that can be chained with native Rust control flow into complex, branching, and looping workflows.
Quickstart
[dependencies]
agentix = "0.5"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
use agentix::{Agent, AgentEvent};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut agent = Agent::deepseek(std::env::var("DEEPSEEK_API_KEY")?);
    let mut stream = agent.chat("Hello!").await?;
    while let Some(event) = stream.next().await {
        if let AgentEvent::Token(t) = event {
            print!("{t}");
        }
    }
    Ok(())
}
Providers
Four built-in providers, all using the same builder API:
// DeepSeek (default model: deepseek-chat)
let mut agent = Agent::deepseek(api_key);

// OpenAI (default model: gpt-4o)
let mut agent = Agent::openai(api_key);

// Anthropic / Claude (default model: claude-opus-4-5)
let mut agent = Agent::anthropic(api_key);

// Gemini (default model: gemini-2.0-flash)
let mut agent = Agent::gemini(api_key);

// Any OpenAI-compatible endpoint (e.g. OpenRouter)
let mut agent = Agent::openai(api_key)
    .base_url("https://openrouter.ai/api/v1")
    .model("deepseek/deepseek-chat");
Agent API
Agent is the primary entry point. It lazily starts an internal runtime on first use.
chat() — one-shot, lazy stream
Sends a message and returns a stream of events for this turn only. The stream ends when Done is emitted.
let mut stream = agent.chat("Hello!").await?;
while let Some(event) = stream.next().await {
    println!("{event:?}");
}
send() + subscribe() — fire-and-forget / multi-consumer
send() dispatches a message without waiting. subscribe() returns a continuous BoxStream that receives all future events and never stops at Done. Both &str/String and raw AgentInput values are accepted by send().
use agentix::AgentInput;

// Send a user message
agent.send("Summarise the conversation so far").await?;

// Send an abort signal
agent.send(AgentInput::Abort).await?;

// Subscribe to the raw event stream
let mut rx = agent.subscribe();
while let Some(event) = rx.next().await {
    println!("{event:?}");
}
sender() — share the channel with spawned tasks
let tx = agent.sender(); // mpsc::Sender<AgentInput>
tokio::spawn(async move {
    tx.send("run this in the background".into()).await.ok();
});
add_tool() — add tools at runtime
// Before first use (builder-style)
let mut agent = Agent::deepseek(api_key).tool(MyTool);

// After first use (async, takes effect immediately)
agent.add_tool(AnotherTool).await;
usage() — token accounting
println!("{:?}", agent.usage()); // UsageStats { prompt_tokens, completion_tokens, ... }
Events — The Communication Layer
AgentInput (what you send)
- `User(Vec<UserContent>)` — new conversation turn (also `From<&str>` / `From<String>`)
- `ToolResult { call_id, result }` — provide a tool execution result
- `Abort` — immediately stop current processing
AgentEvent (what you receive)
- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking/reasoning trace (e.g. DeepSeek-R1)
- `ToolCall(ToolCall)` — model wants to call a tool
- `ToolProgress { name, progress, .. }` — streaming tool output
- `ToolResult { name, result, .. }` — tool finished
- `Usage(UsageStats)` — token usage for this turn
- `Done` — turn complete
- `Error(String)` — an error occurred
Defining Tools
Annotate `impl Tool for YourStruct` with `#[tool]`. Each method becomes a callable tool.

use agentix::tool;

struct Weather;

#[tool]
impl Tool for Weather {
    /// Get the current weather for a city.
    /// city: name of the city to look up
    async fn get_weather(&self, city: String) -> Result<String, String> {
        Ok(format!("Sunny in {city}"))
    }
}

let mut agent = Agent::deepseek(api_key).tool(Weather);
- Doc comment → tool description
- `/// param: description` lines → argument descriptions
- `Result::Err` automatically propagates as `{"error": "..."}` to the LLM
Streaming tools
Add #[streaming] to a method inside #[tool]. No return type annotation needed — the macro infers it. Use async_stream::stream! in the body for yield syntax.
use agentix::tool;
use async_stream::stream;

struct Countdown;

#[tool]
impl Tool for Countdown {
    /// Count down from n, streaming each step.
    /// n: starting number
    #[streaming]
    fn countdown(&self, n: u32) {
        stream! {
            for i in (1..=n).rev() {
                yield format!("{i}...");
            }
        }
    }
}
Normal and streaming methods can be freely mixed in the same #[tool] block.
Memory & Context
use agentix::memory::{LlmSummarizer, SlidingWindow, TokenSlidingWindow};

// Keep the last N turns
let mut agent = Agent::deepseek(api_key).memory(SlidingWindow::new(20));

// Keep up to N tokens of history
let mut agent = Agent::deepseek(api_key).memory(TokenSlidingWindow::new(8_000));

// Auto-summarise old messages with the LLM when exceeding N tokens
let mut agent = Agent::deepseek(api_key).memory(LlmSummarizer::new(8_000));
Nodes & Composition
For advanced multi-agent pipelines, use `AgentNode` (a raw stream transformer) and compose it with other `Node`s.
use agentix::{AgentNode, Node, PromptNode};
use futures::{stream, StreamExt};

let prompt_node = PromptNode::new("Score this text from 1 to 10: {input}");
let scorer_node = AgentNode::new(Agent::deepseek(api_key));

// Chain: String -> PromptNode -> AgentInput -> AgentNode -> AgentEvent
let input = stream::iter(vec!["stream-based agents".to_string()]).boxed();
let agent_input = prompt_node.run(input);
let mut output = scorer_node.run(agent_input);
Reliability
- Automatic retries — exponential backoff for 429 / 5xx responses
- HTTP timeouts — 10 s connect, 120 s response (overridable via `Agent::with_http`)
- Concurrent tool execution — multiple tool calls in one turn run in parallel
- Safe memory truncation — `SlidingWindow` never splits `tool_call`/`tool_result` pairs
- Usage tracking — per-turn and cumulative token accounting across all providers
MCP Tools
Use external processes as tools via the Model Context Protocol:
use agentix::McpTool;
use std::time::Duration;

let tool = McpTool::stdio("path/to/mcp-server", []).await?
    .with_timeout(Duration::from_secs(30));

let mut agent = Agent::deepseek(api_key).tool(tool);
Changelog
0.5.0
- New `Agent` API — `chat()`, `send()`, `subscribe()`, `sender()`, `add_tool()`, `abort()`, `usage()`
- `chat(text)` returns a lazy `BoxStream` (ends at `Done`), backed by `tokio::broadcast`
- `send(input)` accepts `&str`, `String`, or `AgentInput` directly (`From` impls)
- `subscribe()` returns a continuous `BoxStream` that never stops at `Done`
- `add_tool()` inserts tools into the live registry after the runtime has started
- Concurrent tool execution — multiple tool calls in a single turn now run via `FuturesUnordered`
- `SlidingWindow` fix — truncation now skips orphaned `ToolResult`/`tool_call` messages to avoid malformed histories
- `LlmSummarizer` fix — summary is injected as a `user`/`assistant` pair, satisfying strict alternating-role providers (Anthropic, Gemini)
- `estimate_tokens` fix — BPE tokeniser is now initialised once via `OnceLock` instead of being rebuilt on every call
- Default HTTP timeouts — 10 s connect timeout, 120 s response timeout
- Removed `Session` abstraction — `Agent` manages the runtime directly
0.4.x
- Initial `Session`-based multi-turn API
- DeepSeek, OpenAI, Anthropic, Gemini providers
- `#[tool]` and `#[streaming_tool]` macros
- Memory backends: `InMemory`, `SlidingWindow`, `TokenSlidingWindow`, `LlmSummarizer`
- MCP tool support
Contributing
PRs welcome. Built with 🦀 in Rust.
License
MIT OR Apache-2.0