# agentix

Multi-provider LLM client for Rust — streaming, non-streaming, tool calls, agentic loops, and MCP support.

DeepSeek · OpenAI · Anthropic · Gemini — one unified API.
## Installation

```toml
[dependencies]
agentix = "0.9"

# Optional: Model Context Protocol (MCP) tool support
# agentix = { version = "0.9", features = ["mcp"] }
```
## Quick Start

```rust
use agentix::{LlmEvent, Request};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::Client::new();

    let mut stream = Request::deepseek(std::env::var("DEEPSEEK_API_KEY")?)
        .user("Hello!")
        .stream(&http)
        .await?;

    while let Some(event) = stream.next().await {
        match event {
            LlmEvent::Token(t) => print!("{t}"),
            LlmEvent::Done => break,
            _ => {}
        }
    }
    Ok(())
}
```
## Providers

Four built-in providers, all using the same API:

```rust
use agentix::Request;

// Shortcut constructors (provider + default model in one call)
let req = Request::deepseek("key");
let req = Request::openai("key");
let req = Request::anthropic("key");
let req = Request::gemini("key");

// Any OpenAI-compatible endpoint (e.g. OpenRouter)
let req = Request::openai("key")
    .base_url("https://openrouter.ai/api/v1")
    .model("model-name");
```
## Request API

`Request` is a self-contained value type — it carries provider, credentials, model, messages, tools, and tuning. Call `stream()` or `complete()` with a shared `reqwest::Client`.
### stream() — streaming completion

```rust
let http = reqwest::Client::new();

let mut stream = Request::deepseek("key")
    .system_prompt("You are a helpful assistant.")
    .user("Explain lifetimes in Rust.")
    .stream(&http)
    .await?;

while let Some(event) = stream.next().await {
    // handle LlmEvent variants (see below)
}
```
### complete() — non-streaming completion

```rust
let resp = Request::deepseek("key")
    .user("What is 2 + 2?")
    .complete(&http)
    .await?;

println!("{}", resp.content);
println!("{:?}", resp.reasoning);
println!("{:?}", resp.tool_calls);
println!("{:?}", resp.usage);
```
### Builder methods

```rust
let req = Request::deepseek("key")
    .model("deepseek-chat")
    .base_url("https://api.deepseek.com")
    .system_prompt("You are a helpful assistant.")
    .max_tokens(1024)
    .temperature(0.7)
    .retries(3, 500)      // max retries, initial delay ms
    .user("Hello!")       // convenience for adding a user message
    .message(msg)         // add any Message variant
    .messages(history)    // set full history
    .tools(tool_defs);    // set tool definitions
```
### LlmEvent (what you receive from stream())

- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking/reasoning trace (e.g. DeepSeek-R1)
- `ToolCallChunk(ToolCallChunk)` — partial tool call for real-time UI
- `ToolCall(ToolCall)` — completed tool call
- `Usage(UsageStats)` — token usage for the turn
- `Done` — stream ended
- `Error(String)` — provider error
## Defining Tools

Two styles are supported: a standalone function (simpler) and an impl block (multiple tools in one struct).

### Standalone function

```rust
use agentix::tool;

/// Add two numbers.
/// a: first number
/// b: second number
#[tool]
async fn add(a: f64, b: f64) -> f64 {
    a + b
}

/// Divide a by b.
#[tool]
async fn divide(a: f64, b: f64) -> Result<f64, String> {
    if b == 0.0 {
        return Err("division by zero".into());
    }
    Ok(a / b)
}

// Combine with the + operator
let tools = add + divide;
```

The macro generates a unit struct with the same name as the function and implements `Tool` for it.
### Impl block (multiple methods per struct)

```rust
struct Calculator;

#[tool]
impl Calculator {
    /// Add two numbers.
    /// a: first number
    /// b: second number
    async fn add(&self, a: f64, b: f64) -> f64 {
        a + b
    }

    /// Divide a by b.
    async fn divide(&self, a: f64, b: f64) -> Result<f64, String> {
        if b == 0.0 {
            return Err("division by zero".into());
        }
        Ok(a / b)
    }
}
```

- Doc comment → tool description
- `/// param: description` lines → argument descriptions
- `Result::Err` automatically propagates as `{"error": "..."}` to the LLM
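The error-propagation rule above can be pictured with a small standalone sketch (plain Rust, no agentix dependency; `tool_result_json` is a hypothetical helper for illustration, not part of the crate's API):

```rust
// Sketch of how a tool's Result could be mapped to the JSON the LLM sees.
// `tool_result_json` is illustrative only, not agentix's actual code.
fn tool_result_json(result: Result<String, String>) -> String {
    match result {
        // Success: the raw output is passed through as-is
        Ok(output) => output,
        // Failure: the error message is wrapped as {"error": "..."}
        Err(msg) => format!("{{\"error\": \"{}\"}}", msg.replace('"', "\\\"")),
    }
}

fn main() {
    println!("{}", tool_result_json(Err("division by zero".into())));
    // → {"error": "division by zero"}
}
```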
### Streaming tools

Add `#[streaming]` to yield `ToolOutput::Progress` / `ToolOutput::Result` incrementally:

```rust
use agentix::{tool, ToolOutput};

struct Downloader;

#[tool]
impl Downloader {
    /// Download a file, reporting progress along the way.
    #[streaming]
    async fn download(&self, url: String) {
        // yield ToolOutput::Progress updates, then a final ToolOutput::Result
    }
}
```

Normal and streaming methods can be freely mixed in the same `#[tool]` block.
## MCP Tools

Use external processes as tools via the Model Context Protocol:

```rust
use agentix::McpTool;
use std::time::Duration;

// stdio() takes the MCP server's command line
let tool = McpTool::stdio("server-command", &["--arg"]).await?
    .with_timeout(Duration::from_secs(30));

// Add to a ToolBundle alongside regular tools
let mut bundle = ToolBundle::new();
bundle.push(tool);
```
### Runtime add / remove

```rust
let mut bundle = ToolBundle::default();

bundle += Calculator;  // AddAssign — add tool in-place
bundle -= Calculator;  // SubAssign — remove all functions Calculator provides

let bundle2 = bundle + Calculator - Calculator; // Sub — returns new bundle
```
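For intuition, the operator surface can be mimicked in a few lines of plain Rust — a toy bundle, not agentix's actual implementation:

```rust
use std::ops::{Add, AddAssign, Sub, SubAssign};

// Toy bundle mirroring the += / -= / + / - surface (illustrative only).
#[derive(Default, Clone, Debug)]
struct Bundle(Vec<&'static str>);

// Stand-in for a tool-providing type
struct NamedTool(&'static str);

impl AddAssign<NamedTool> for Bundle {
    fn add_assign(&mut self, t: NamedTool) {
        self.0.push(t.0); // add in-place
    }
}
impl SubAssign<NamedTool> for Bundle {
    fn sub_assign(&mut self, t: NamedTool) {
        self.0.retain(|n| *n != t.0); // remove everything that tool provides
    }
}
impl Add<NamedTool> for Bundle {
    type Output = Bundle;
    fn add(mut self, t: NamedTool) -> Bundle { self += t; self }
}
impl Sub<NamedTool> for Bundle {
    type Output = Bundle;
    fn sub(mut self, t: NamedTool) -> Bundle { self -= t; self }
}

fn main() {
    let mut b = Bundle::default();
    b += NamedTool("calc");
    let b2 = b.clone() + NamedTool("search") - NamedTool("calc");
    println!("{:?}", b2.0); // → ["search"]
}
```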
## Structured Output

Constrain the model to emit JSON matching a Rust struct using `Request::json_schema()`.
Derive `schemars::JsonSchema` on your struct and pass the generated schema:

```rust
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

#[derive(Deserialize, JsonSchema)]
struct Review {
    rating: u8,
    summary: String,
}

let schema = serde_json::to_value(schema_for!(Review))?;

let response = Request::openai("key")
    .system_prompt("You are a code reviewer.")
    .user("Review this function: ...")
    .json_schema("review", schema, true) // strict=true enforces the schema
    .complete(&http)
    .await?;

let review: Review = response.json()?;
```

See examples/08_structured_output.rs for a runnable example.
Provider support:

- OpenAI — full `json_schema` support (gpt-4o and later)
- Gemini — `responseSchema` + `responseMimeType: application/json` (fully supported)
- DeepSeek — `json_object` only; `json_schema` is automatically degraded with a `tracing::warn`
- Anthropic — `response_format` is ignored; use prompt engineering instead
## Reliability

- Automatic retries — exponential backoff for 429 / 5xx responses
- Usage tracking — per-request token accounting across all providers; `AgentEvent::Done` contains cumulative totals across all turns
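As a rough sketch of what `retries(max_retries, initial_delay_ms)` implies, exponential backoff doubles the delay on each attempt (illustrative only; the crate's actual schedule, jitter, and caps are not specified here):

```rust
// Illustrative exponential backoff: delay doubles with each retry attempt.
// agentix's real schedule (jitter, maximum delay) may differ.
fn backoff_delay_ms(initial_ms: u64, attempt: u32) -> u64 {
    // cap the shift to avoid overflow on absurd attempt counts
    initial_ms.saturating_mul(1u64 << attempt.min(16))
}

fn main() {
    // With retries(3, 500): delays before retries 1, 2, 3
    for attempt in 0..3 {
        println!("{}", backoff_delay_ms(500, attempt)); // 500, 1000, 2000
    }
}
```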
## Agent (agentic loop)

`agentix::agent()` drives the full LLM ↔ tool-call loop and yields typed `AgentEvent`s.
Pass it a `ToolBundle`, a base `Request`, and an initial history — it handles
repeated LLM calls, tool execution, and history accumulation automatically.

```rust
use agentix::{agent, AgentEvent, Request, ToolBundle};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::Client::new();

    let mut tools = ToolBundle::default();
    tools += Calculator;

    let req = Request::deepseek("key")
        .system_prompt("Use tools when helpful.")
        .user("What is 17 * 23?");

    // agent(tools, client, request, history, history_budget)
    let mut stream = agent(tools, http, req, Vec::new(), None);

    while let Some(event) = stream.next().await {
        match event {
            AgentEvent::Token(t) => print!("{t}"),
            AgentEvent::Done(usage) => println!("\n{usage:?}"),
            _ => {}
        }
    }
    Ok(())
}
```
### AgentEvent variants

- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking trace
- `ToolCallChunk(ToolCallChunk)` — streaming partial tool call
- `ToolCallStart(ToolCall)` — complete tool call, about to execute
- `ToolProgress { id, name, progress }` — intermediate tool output
- `ToolResult { id, name, content }` — final tool result
- `Usage(UsageStats)` — token usage per LLM request
- `Done(UsageStats)` — emitted once when the loop finishes normally; contains cumulative totals across all turns
- `Warning(String)` — recoverable stream error
- `Error(String)` — fatal error
`agentix::agent()` returns a `BoxStream<'static, AgentEvent>` — drop it to abort.
## Changelog

### 0.9.0

- New `agentix::agent()` free function — stateless agentic loop: `agent(tools, client, request, history, history_budget)`
- New `AgentEvent` enum — `Token`, `Reasoning`, `ToolCallChunk`, `ToolCallStart`, `ToolProgress`, `ToolResult`, `Usage`, `Done`, `Warning`, `Error`
- `AgentEvent::Done(UsageStats)` — cumulative token usage across all turns, emitted once on normal completion
- Concurrent tool execution — all tool calls in one LLM turn run in parallel via `select_all`; progress events arrive in real time
- `Request::deepseek/openai/anthropic/gemini(key)` — shortcut constructors
- `Request::json_schema(name, schema, strict)` — structured output with JSON Schema
- `ToolBundle::remove(name)` — runtime tool removal
- `Arc<dyn Tool>` implements `Tool` — pass a shared bundle without wrapping
- tracing integration — `debug!` spans around LLM requests and tool execution (no feature flag needed, uses the `tracing` crate)
### 0.8.0

- Replaced `LlmClient` with `Request` — self-contained value type with builder pattern
- Replaced `Provider` trait with `Provider` enum — `DeepSeek`, `OpenAI`, `Anthropic`, `Gemini`
- Removed shared mutable state — `Request` is `Clone`, `Send`, `Sync`; caller passes `&reqwest::Client`
- Removed `AgentConfig` from public API — all config lives in `Request` fields
### 0.7.0

- Removed `Agent` struct — `LlmClient` is now the sole entry point; callers own the loop
- Removed `Memory` trait — `InMemory`, `SlidingWindow`, `TokenSlidingWindow`, `LlmSummarizer` removed
- Removed `AgentEvent` / `AgentInput` — only `LlmEvent` remains
- New `LlmClient::complete()` — native non-streaming API for all four providers
- New `CompleteResponse` — content, reasoning, tool_calls, usage in one struct
### 0.6.0

- Non-streaming `complete()` method on `Provider` trait
- `post_json` helper for non-streaming HTTP POST with retry
- `CompleteResponse` type
### 0.5.0

- `Agent` API with `chat()`, `send()`, `subscribe()`, `add_tool()`, `abort()`, `usage()`
- Concurrent tool execution via `FuturesUnordered`
- `SlidingWindow` fix for orphaned tool messages
- Default HTTP timeouts (10 s connect, 120 s response)
### 0.4.x

- Initial multi-turn API
- DeepSeek, OpenAI, Anthropic, Gemini providers
- `#[tool]` and `#[streaming]` macros
- Memory backends, MCP tool support
## License

MIT OR Apache-2.0