# agentix

Multi-provider LLM client for Rust — streaming, non-streaming, tool calls, agentic loops, and MCP support.

DeepSeek · OpenAI · Anthropic · Gemini · Kimi · GLM · MiniMax · Grok — one unified API.
## vs. rig / llm-chain

| | agentix | rig | llm-chain |
|---|---|---|---|
| Agentic loop | ✅ `agent()` built-in | manual | manual |
| Streaming tool calls | ✅ `#[streaming]` | ❌ | ❌ |
| MCP support | ✅ | ❌ | ❌ |
| Proc-macro tools | ✅ `#[tool]` | ✅ `#[tool]` | ❌ |
| Concurrent tool execution | ✅ | ❌ | ❌ |
| Provider support | 8 | 10+ | 4 |
| API style | value-type `Request` | builder | builder |
agentix trades breadth of provider support for depth: built-in agentic loop, real-time streaming tool output, and first-class MCP integration.
## Installation

```toml
[dependencies]
agentix = "0.9"

# Optional: Model Context Protocol (MCP) tool support
# agentix = { version = "0.9", features = ["mcp"] }
```
## Quick Start

```rust
use agentix::{LlmEvent, Request};
use futures::StreamExt;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let http = reqwest::Client::new();

    let mut stream = Request::deepseek("<DEEPSEEK_API_KEY>")
        .user("Write a haiku about Rust.")
        .stream(&http)
        .await?;

    while let Some(event) = stream.next().await {
        match event {
            LlmEvent::Token(t) => print!("{t}"),
            LlmEvent::Done => break,
            _ => {}
        }
    }
    Ok(())
}
```
## Providers
Eight built-in providers, all using the same API:
```rust
use agentix::Request;

// Shortcut constructors (provider + default model in one call)
let req = Request::deepseek("<API_KEY>");
let req = Request::openai("<API_KEY>");
let req = Request::anthropic("<API_KEY>");
let req = Request::gemini("<API_KEY>");
let req = Request::kimi("<API_KEY>");    // Moonshot AI — kimi-k2.5
let req = Request::glm("<API_KEY>");     // Zhipu AI — glm-5
let req = Request::minimax("<API_KEY>"); // MiniMax — MiniMax-M2.7 (Anthropic API)
let req = Request::grok("<API_KEY>");

// Any OpenAI-compatible endpoint (e.g. OpenRouter)
let req = Request::openai("<OPENROUTER_API_KEY>")
    .base_url("https://openrouter.ai/api/v1")
    .model("<MODEL_ID>");
```
## Request API
`Request` is a self-contained value type — it carries provider, credentials, model,
messages, tools, and tuning. Call `stream()` or `complete()` with a shared `reqwest::Client`.
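Because it is a plain value, a configured base `Request` can be reused across calls. A minimal sketch, assuming `Request` implements `Clone` (not stated in this README):

```rust
// Hypothetical reuse pattern — assumes `Request: Clone`.
let base = Request::deepseek("<API_KEY>")
    .system_prompt("You are terse.")
    .temperature(0.2);

let resp_a = base.clone().user("Question A").complete(&http).await?;
let resp_b = base.clone().user("Question B").complete(&http).await?;
```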
### `stream()` — streaming completion
```rust
let http = reqwest::Client::new();

let mut stream = Request::deepseek("<API_KEY>")
    .system_prompt("You are a helpful assistant.")
    .user("Hello!")
    .stream(&http)
    .await?;

while let Some(event) = stream.next().await {
    // Handle LlmEvents (see the variant list below)
    println!("{event:?}");
}
```
### `complete()` — non-streaming completion
```rust
let resp = Request::deepseek("<API_KEY>")
    .user("What is the capital of France?")
    .complete(&http)
    .await?;

// Field names are illustrative — the response carries the assistant
// text, any tool calls, usage stats, and a reasoning trace.
println!("{}", resp.content);
println!("{:?}", resp.tool_calls);
println!("{:?}", resp.usage);
println!("{:?}", resp.reasoning);
```
### Builder methods
```rust
let req = Request::deepseek("<API_KEY>")
    .model("deepseek-chat")
    .base_url("https://api.deepseek.com")
    .system_prompt("You are concise.")
    .max_tokens(1024)
    .temperature(0.7)
    .retries(3, 500)    // max retries, initial delay ms
    .user("Hello!")     // convenience for adding a user message
    .message(msg)       // add any Message variant
    .messages(history)  // set full history
    .tools(tools);      // set tool definitions
```
### `LlmEvent` (what you receive from `stream()`)
- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking/reasoning trace (e.g. DeepSeek-R1)
- `ToolCallChunk(ToolCallChunk)` — partial tool call for real-time UI
- `ToolCall(ToolCall)` — completed tool call
- `Usage(UsageStats)` — token usage for the turn
- `Done` — stream ended
- `Error(String)` — provider error
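A typical consumer matches on these variants; a sketch using only the variants listed above:

```rust
while let Some(event) = stream.next().await {
    match event {
        LlmEvent::Token(t) => print!("{t}"),
        LlmEvent::Reasoning(r) => eprint!("[thinking] {r}"),
        LlmEvent::ToolCall(call) => println!("\ntool requested: {call:?}"),
        LlmEvent::Usage(stats) => println!("\nusage: {stats:?}"),
        LlmEvent::Done => break,
        LlmEvent::Error(e) => eprintln!("error: {e}"),
        _ => {} // ToolCallChunk — partial tool-call JSON for live UIs
    }
}
```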
## Defining Tools
Two styles are supported: standalone function (simpler) and impl block (multiple tools in one struct).
### Standalone function
```rust
use agentix::{agent, tool};

/// Add two numbers.
/// a: first number
/// b: second number
#[tool]
async fn add(a: f64, b: f64) -> f64 {
    a + b
}

/// Divide a by b.
#[tool]
async fn divide(a: f64, b: f64) -> Result<f64, String> {
    if b == 0.0 { return Err("division by zero".into()); }
    Ok(a / b)
}

// Combine with the + operator
let tools = add + divide;
let mut stream = agent(tools, req, history);
```
The macro generates a unit struct with the same name as the function and implements `Tool` for it.
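Because the generated struct is an ordinary value, it should compose with the `ToolBundle` operators described under "Runtime add / remove" below; a one-line sketch:

```rust
// `add` names both the function and the generated unit struct.
let bundle = ToolBundle::default() + add;
```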
### Impl block (multiple methods per struct)
```rust
struct Calculator;

#[tool]
impl Calculator {
    /// Add two numbers.
    /// a: first number
    /// b: second number
    async fn add(&self, a: f64, b: f64) -> f64 {
        a + b
    }

    /// Divide a by b.
    async fn divide(&self, a: f64, b: f64) -> Result<f64, String> {
        if b == 0.0 { return Err("division by zero".into()); }
        Ok(a / b)
    }
}
```
- Doc comment → tool description
- `/// param: description` lines → argument descriptions
- `Result::Err` automatically propagates as `{"error": "..."}` to the LLM
### Streaming tools
Add `#[streaming]` to yield `ToolOutput::Progress` / `ToolOutput::Result` incrementally:
```rust
use agentix::{tool, ToolOutput};

// `Downloader` is an illustrative name.
struct Downloader;

#[tool]
impl Downloader {
    /// Download a URL, reporting progress as it goes.
    #[streaming]
    async fn download(&self, url: String) {
        // Body sketch: a #[streaming] method emits ToolOutput::Progress
        // items while it runs and finishes with a ToolOutput::Result.
        // (The exact yielding mechanism is macro-provided and elided here.)
    }
}
```
Normal and streaming methods can be freely mixed in the same #[tool] block.
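As a sketch of that mixing (method names and bodies are illustrative, continuing the assumptions above):

```rust
#[tool]
impl Downloader {
    /// Plain tool method: returns a single value.
    async fn status(&self, url: String) -> String {
        format!("checked {url}")
    }

    /// Streaming tool method: emits Progress items, then a Result.
    #[streaming]
    async fn download(&self, url: String) {
        // sketch: incremental ToolOutput values are yielded here
    }
}
```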
## MCP Tools
Use external processes as tools via the Model Context Protocol:
```rust
use agentix::McpTool;
use std::time::Duration;

// Spawn an MCP server as a child process (command and args are illustrative).
let tool = McpTool::stdio("npx", ["-y", "@modelcontextprotocol/server-filesystem", "."]).await?
    .with_timeout(Duration::from_secs(30));

// Add to a ToolBundle alongside regular tools
let mut bundle = ToolBundle::new();
bundle.push(tool);
```
### Runtime add / remove
```rust
let mut bundle = ToolBundle::default();
bundle += Calculator;  // AddAssign — add tool in-place
bundle -= Calculator;  // SubAssign — remove all functions Calculator provides
let bundle2 = bundle + Calculator - Calculator;  // Add/Sub — return a new bundle
```
## Structured Output
Constrain the model to emit JSON matching a Rust struct using `Request::json_schema()`.
Derive `schemars::JsonSchema` on your struct and pass the generated schema:
```rust
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

#[derive(Deserialize, JsonSchema)]
struct Review {
    rating: u8,
    summary: String,
}

let schema = serde_json::to_value(schema_for!(Review))?;

let response = Request::openai("<API_KEY>")
    .system_prompt("Extract a structured review from the user's text.")
    .user("Five stars, best espresso in town.")
    .json_schema(schema, true) // strict=true enforces the schema
    .complete(&http)
    .await?;

let review: Review = response.json()?;
```
See `examples/08_structured_output.rs` for a runnable example.
Provider support:

- OpenAI — full `json_schema` support (gpt-4o and later)
- Gemini — `responseSchema` + `responseMimeType: application/json` (fully supported)
- DeepSeek — `json_object` only; `json_schema` is automatically degraded with a `tracing::warn`
- Anthropic — `response_format` is ignored; use prompt engineering instead (see the sketch below)
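A minimal sketch of that prompt-engineering fallback for Anthropic, reusing `schema` and `Review` from the example above; the prompt wording is illustrative, not a documented recipe:

```rust
// Anthropic ignores response_format, so embed the schema in the prompt instead.
let schema_text = serde_json::to_string_pretty(&schema)?;

let response = Request::anthropic("<API_KEY>")
    .system_prompt(format!(
        "Reply with ONLY a JSON object matching this schema:\n{schema_text}"
    ))
    .user("Five stars, best espresso in town.")
    .complete(&http)
    .await?;

let review: Review = response.json()?;
```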
## Reliability
- Automatic retries — exponential backoff for 429 / 5xx responses (sketched below)
- Usage tracking — per-request token accounting across all providers; `AgentEvent::Done` contains cumulative totals across all turns
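A short sketch combining both, with illustrative retry values and the `LlmEvent::Usage` variant from earlier:

```rust
// Retry up to 5 times on 429/5xx, starting at 250 ms with exponential backoff.
let req = Request::openai("<API_KEY>")
    .retries(5, 250)
    .user("Hello!");

let mut stream = req.stream(&http).await?;
while let Some(event) = stream.next().await {
    // Usage events report per-request token counts.
    if let LlmEvent::Usage(stats) = &event {
        println!("tokens this request: {stats:?}");
    }
}
```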
## Agent (agentic loop)
`agentix::agent()` drives the full LLM ↔ tool-call loop and yields typed `AgentEvent`s.
Pass it a `ToolBundle`, a base `Request`, and an initial history — it handles
repeated LLM calls, tool execution, and history accumulation automatically.
```rust
use agentix::{agent, AgentEvent, Request};
use futures::StreamExt;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let tools = add + divide; // from "Defining Tools" above
    let req = Request::deepseek("<API_KEY>").user("What is (3 + 4) / 2?");

    let mut stream = agent(tools, req, Vec::new());
    while let Some(event) = stream.next().await {
        match event {
            AgentEvent::Token(t) => print!("{t}"),
            AgentEvent::ToolResult { name, content, .. } => println!("\n[{name}] {content}"),
            AgentEvent::Done(usage) => println!("\ntotal usage: {usage:?}"),
            _ => {}
        }
    }
    Ok(())
}
```
### `AgentEvent` variants
- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking trace
- `ToolCallChunk(ToolCallChunk)` — streaming partial tool call
- `ToolCallStart(ToolCall)` — complete tool call, about to execute
- `ToolProgress { id, name, progress }` — intermediate tool output
- `ToolResult { id, name, content }` — final tool result
- `Usage(UsageStats)` — token usage per LLM request
- `Done(UsageStats)` — emitted once when the loop finishes normally; contains cumulative totals across all turns
- `Warning(String)` — recoverable stream error
- `Error(String)` — fatal error
`agentix::agent()` returns a `BoxStream<'static, AgentEvent>` — drop it to abort.
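Since cancellation is just dropping the stream, scoping is enough; a sketch reusing the `agent()` arguments from above:

```rust
{
    let mut stream = agent(tools, req, Vec::new());
    let _first = stream.next().await; // consume one event
} // stream dropped here: the loop is aborted
```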
## License
MIT OR Apache-2.0