# agentix

Stateless, multi-provider LLM client for Rust — streaming, non-streaming, tool calls, and MCP support.

DeepSeek · OpenAI · Anthropic · Gemini — one unified API.
## Installation

```toml
[dependencies]
agentix = "0.7"

# Optional: Model Context Protocol (MCP) tool support
# agentix = { version = "0.7", features = ["mcp"] }
```
## Quick Start

```rust
use agentix::{LlmClient, LlmEvent, Message};
use futures::StreamExt;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = LlmClient::deepseek(std::env::var("DEEPSEEK_API_KEY")?);

    // `Message::user` is illustrative — check the crate docs for the
    // actual message type. The caller owns the message history.
    let messages = vec![Message::user("Why is the sky blue?")];

    let mut stream = client.stream(&messages).await?;
    while let Some(event) = stream.next().await {
        match event {
            LlmEvent::Token(t) => print!("{t}"),
            LlmEvent::Done => break,
            _ => {}
        }
    }
    Ok(())
}
```
## Providers

Four built-in providers, all using the same API:

```rust
use agentix::LlmClient;

// Constructor arguments are reconstructed; an `api_key` string is assumed.

// DeepSeek (default model: deepseek-chat)
let client = LlmClient::deepseek(api_key);

// OpenAI (default model: gpt-4o)
let client = LlmClient::openai(api_key);

// Anthropic / Claude (default model: claude-opus-4-5)
let client = LlmClient::anthropic(api_key);

// Gemini (default model: gemini-2.0-flash)
let client = LlmClient::gemini(api_key);

// Any OpenAI-compatible endpoint (e.g. OpenRouter)
let client = LlmClient::openai(api_key);
client.base_url("https://openrouter.ai/api/v1");
client.model("anthropic/claude-sonnet-4"); // model name illustrative

// From config strings (useful for dynamic provider selection);
// the argument order here is an assumption.
let client = LlmClient::from_parts("deepseek", "deepseek-chat", api_key)?;
```
## LlmClient API

`LlmClient` is stateless — the caller owns message history and tool dispatch.
All clones share the same config and HTTP connection pool.
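The "all clones share the same config and pool" property is typically achieved by wrapping shared state in an `Arc`. A minimal sketch of that pattern — the `Inner` fields here are illustrative, not agentix's actual internals:

```rust
use std::sync::Arc;

// Illustrative internals: config plus (in a real client) the HTTP pool.
struct Inner {
    model: String,
    base_url: String,
    // http: reqwest::Client,  // the connection pool would live here
}

// Cloning copies only the Arc pointer, so clones are cheap and every
// handle observes the same configuration.
#[derive(Clone)]
struct Client {
    inner: Arc<Inner>,
}

impl Client {
    fn model(&self) -> &str {
        &self.inner.model
    }
}
```

Because no per-conversation state lives in the struct, a clone can be handed to each task or request handler without synchronization concerns.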
### stream() — streaming completion

```rust
// Argument shape reconstructed; see Quick Start for a full example.
let mut stream = client.stream(&messages).await?;
while let Some(event) = stream.next().await {
    // handle LlmEvent variants here
}
```
### complete() — non-streaming completion

```rust
// Field names follow the CompleteResponse struct (content, reasoning,
// tool_calls, usage); the format specifiers are illustrative.
let resp = client.complete(&messages).await?;
println!("{}", resp.content);
println!("{:?}", resp.reasoning);
println!("{:?}", resp.tool_calls);
println!("{:?}", resp.usage);
```
## Configuration

```rust
// Setter argument values are illustrative.
client.model("deepseek-reasoner");
client.base_url("https://api.deepseek.com");
client.system_prompt("You are a helpful assistant.");
client.max_tokens(4096);
client.temperature(0.7);

// Read current config
let snap = client.snapshot();
```
## LlmEvent (what you receive from stream())

- `Token(String)` — incremental response text
- `Reasoning(String)` — thinking/reasoning trace (e.g. DeepSeek-R1)
- `ToolCallChunk(ToolCallChunk)` — partial tool call for real-time UI
- `ToolCall(ToolCall)` — completed tool call
- `Usage(UsageStats)` — token usage for the turn
- `Done` — stream ended
- `Error(String)` — provider error
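A caller usually folds these events with a single `match`. Here is a self-contained sketch using a local, simplified mirror of the enum (only a subset of the variants, defined here purely for illustration — the real enum lives in agentix):

```rust
// Local, simplified mirror of the LlmEvent variants listed above.
enum LlmEvent {
    Token(String),
    Reasoning(String),
    Done,
    Error(String),
}

/// Fold a finished stream of events into the final response text,
/// the way a consumer of `stream()` typically would.
fn collect_text(events: Vec<LlmEvent>) -> Result<String, String> {
    let mut out = String::new();
    for ev in events {
        match ev {
            LlmEvent::Token(t) => out.push_str(&t),
            LlmEvent::Reasoning(_) => {} // surface separately in a UI if desired
            LlmEvent::Done => break,
            LlmEvent::Error(e) => return Err(e),
        }
    }
    Ok(out)
}
```

The same shape extends naturally to the tool-call and usage variants: accumulate `ToolCallChunk`s for live display, act on `ToolCall` when complete, and record `Usage` for accounting.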
## Defining Tools

Annotate `impl Tool for YourStruct` with `#[tool]`. Each method becomes a callable tool.

```rust
use agentix::{tool, Tool};

// Reconstructed example — the struct, method, and signatures are illustrative.
struct Weather;

#[tool]
impl Tool for Weather {
    /// Get the current temperature for a city.
    ///
    /// city: name of the city to look up
    async fn get_weather(&self, city: String) -> Result<String, String> {
        Ok(format!("21 °C in {city}"))
    }
}
```
- Doc comment → tool description
- `/// param: description` lines → argument descriptions
- `Result::Err` automatically propagates as `{"error": "..."}` to the LLM
### Streaming tools

Add `#[streaming]` to yield `ToolOutput::Progress` / `ToolOutput::Result` incrementally:

```rust
use agentix::{streaming, tool, Tool, ToolOutput};

// Reconstructed sketch — how values are yielded is an assumption.
struct Indexer;

#[tool]
impl Tool for Indexer {
    /// Index a directory, reporting progress as it goes.
    #[streaming]
    async fn index(&self, path: String) {
        // yield ToolOutput::Progress(..) while working,
        // then ToolOutput::Result(..) when done
    }
}
```
Normal and streaming methods can be freely mixed in the same `#[tool]` block.
## MCP Tools

Use external processes as tools via the Model Context Protocol:

```rust
use agentix::{McpTool, ToolBundle};
use std::time::Duration;

// The stdio() arguments and timeout value are illustrative.
let tool = McpTool::stdio("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]).await?
    .with_timeout(Duration::from_secs(30));

// Add to a ToolBundle alongside regular tools
let mut bundle = ToolBundle::new();
bundle.push(tool);
```
## Reliability

- Automatic retries — exponential backoff for 429 / 5xx responses
- HTTP timeouts — 10 s connect, 120 s response (overridable via `LlmClient::with_http`)
- Usage tracking — per-request token accounting across all providers
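The retry behaviour above amounts to a delay schedule plus a status-code predicate. A self-contained sketch — the base delay and cap below are assumptions, not agentix's documented constants:

```rust
use std::time::Duration;

/// Exponential backoff: the base delay doubles per attempt, capped at `max`.
/// Constants are illustrative; a production client would also add jitter.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}

/// Retry only on the statuses listed above: 429 and 5xx.
fn is_retryable(status: u16) -> bool {
    status == 429 || (500..=599).contains(&status)
}
```

With a 500 ms base and an 8 s cap, attempts 0..n wait 0.5 s, 1 s, 2 s, 4 s, then 8 s thereafter.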
## Changelog

### 0.7.0

- Removed `Agent` struct — `LlmClient` is now the sole entry point; callers own the loop
- Removed `Memory` trait — `InMemory`, `SlidingWindow`, `TokenSlidingWindow`, `LlmSummarizer` removed
- Removed `AgentEvent` / `AgentInput` — only `LlmEvent` remains
- New `LlmClient::complete()` — native non-streaming API for all four providers
- New `CompleteResponse` — content, reasoning, tool_calls, usage in one struct

### 0.6.0

- Non-streaming `complete()` method on the `Provider` trait
- `post_json` helper for non-streaming HTTP POST with retry
- `CompleteResponse` type

### 0.5.0

- `Agent` API with `chat()`, `send()`, `subscribe()`, `add_tool()`, `abort()`, `usage()`
- Concurrent tool execution via `FuturesUnordered`
- `SlidingWindow` fix for orphaned tool messages
- Default HTTP timeouts (10 s connect, 120 s response)

### 0.4.x

- Initial multi-turn API
- DeepSeek, OpenAI, Anthropic, Gemini providers
- `#[tool]` and `#[streaming]` macros
- Memory backends, MCP tool support
## License

MIT OR Apache-2.0