agentix 0.8.1

Multi-provider LLM client for Rust — streaming, non-streaming, tool calls, and MCP support.

DeepSeek · OpenAI · Anthropic · Gemini — one unified API.

Installation

[dependencies]
agentix = "0.8"

# Optional: Model Context Protocol (MCP) tool support
# agentix = { version = "0.8", features = ["mcp"] }

Quick Start

use agentix::{Request, Provider, LlmEvent};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::Client::new();

    let mut stream = Request::new(Provider::DeepSeek, std::env::var("DEEPSEEK_API_KEY")?)
        .system_prompt("You are a helpful assistant.")
        .user("What is the capital of France?")
        .stream(&http)
        .await?;

    while let Some(event) = stream.next().await {
        match event {
            LlmEvent::Token(t) => print!("{t}"),
            LlmEvent::Done     => break,
            _ => {}
        }
    }
    println!();
    Ok(())
}

Providers

Four built-in providers, all using the same API:

use agentix::{Request, Provider};

// DeepSeek  (default model: deepseek-chat)
let req = Request::new(Provider::DeepSeek, "sk-...");

// OpenAI  (default model: gpt-4o)
let req = Request::new(Provider::OpenAI, "sk-...");

// Anthropic / Claude  (default model: claude-sonnet-4-20250514)
let req = Request::new(Provider::Anthropic, "sk-ant-...");

// Gemini  (default model: gemini-2.0-flash)
let req = Request::new(Provider::Gemini, "AIza...");

// Any OpenAI-compatible endpoint (e.g. OpenRouter)
let req = Request::new(Provider::OpenAI, "sk-or-...")
    .base_url("https://openrouter.ai/api/v1")
    .model("openrouter/free");

Request API

Request is a self-contained value type — it carries provider, credentials, model, messages, tools, and tuning. Call stream() or complete() with a shared reqwest::Client.

stream() — streaming completion

let http = reqwest::Client::new();
let mut stream = Request::new(Provider::OpenAI, "sk-...")
    .system_prompt("You are helpful.")
    .user("Hello!")
    .stream(&http)
    .await?;

while let Some(event) = stream.next().await {
    match event {
        LlmEvent::Token(t)         => print!("{t}"),
        LlmEvent::Reasoning(r)     => print!("[think] {r}"),
        LlmEvent::ToolCall(tc)     => println!("tool: {}({})", tc.name, tc.arguments),
        LlmEvent::Usage(u)         => println!("tokens: {}", u.total_tokens),
        LlmEvent::Error(e)         => eprintln!("error: {e}"),
        LlmEvent::Done             => break,
        _                          => {}
    }
}

complete() — non-streaming completion

let resp = Request::new(Provider::OpenAI, "sk-...")
    .user("What is 2+2?")
    .complete(&http)
    .await?;
println!("{}", resp.content.unwrap_or_default());
println!("reasoning: {:?}", resp.reasoning);
println!("tool_calls: {:?}", resp.tool_calls);
println!("usage: {:?}", resp.usage);

Builder methods

let req = Request::new(Provider::DeepSeek, "sk-...")
    .model("deepseek-reasoner")
    .base_url("https://custom.api/v1")
    .system_prompt("You are helpful.")
    .max_tokens(4096)
    .temperature(0.7)
    .retries(5, 2000)           // max retries, initial delay ms
    .user("Hello!")             // convenience for adding a user message
    .message(msg)               // add any Message variant
    .messages(vec![...])        // set full history
    .tools(tool_defs);          // set tool definitions

LlmEvent (what you receive from stream())

  • Token(String) — incremental response text
  • Reasoning(String) — thinking/reasoning trace (e.g. DeepSeek-R1)
  • ToolCallChunk(ToolCallChunk) — partial tool call for real-time UI
  • ToolCall(ToolCall) — completed tool call
  • Usage(UsageStats) — token usage for the turn
  • Done — stream ended
  • Error(String) — provider error

Defining Tools

Annotate an impl Tool for YourStruct block with #[tool]. Each method in the block becomes a callable tool.

use agentix::tool;

struct Calculator;

#[tool]
impl agentix::Tool for Calculator {
    /// Add two numbers.
    /// a: first number
    /// b: second number
    async fn add(&self, a: i64, b: i64) -> i64 {
        a + b
    }

    /// Divide a by b.
    async fn divide(&self, a: f64, b: f64) -> Result<f64, String> {
        if b == 0.0 { Err("division by zero".into()) } else { Ok(a / b) }
    }
}
  • Doc comment → tool description
  • /// param: description lines → argument descriptions
  • Result::Err automatically propagates as {"error": "..."} to the LLM
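
The Err branch in divide above reaches the model as a JSON error object. A rough illustration of that convention — not the macro's actual serializer, and the exact payload shape beyond the {"error": "..."} form stated above is an assumption:

```rust
// Illustrative only: render a tool Result in the {"error": "..."}
// shape described above. agentix's #[tool] macro does this
// internally; the escaping and the Ok-side layout here are assumptions.
fn tool_result_to_json(result: Result<f64, String>) -> String {
    match result {
        Ok(v) => format!("{{\"result\": {v}}}"),
        Err(e) => format!("{{\"error\": \"{}\"}}", e.replace('"', "\\\"")),
    }
}

fn main() {
    println!("{}", tool_result_to_json(Ok(2.5)));
    println!("{}", tool_result_to_json(Err("division by zero".into())));
}
```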

Streaming tools

Add #[streaming] to yield ToolOutput::Progress / ToolOutput::Result incrementally:

use agentix::{tool, ToolOutput};

struct ProgressTool;

#[tool]
impl agentix::Tool for ProgressTool {
    /// Run a long job and stream progress.
    /// steps: number of steps
    #[streaming]
    fn long_job(&self, steps: u32) {
        async_stream::stream! {
            for i in 1..=steps {
                yield ToolOutput::Progress(format!("{i}/{steps}"));
            }
            yield ToolOutput::Result(serde_json::json!({ "done": true }));
        }
    }
}

Normal and streaming methods can be freely mixed in the same #[tool] block.


MCP Tools

Use external processes as tools via the Model Context Protocol:

use agentix::McpTool;
use std::time::Duration;

let tool = McpTool::stdio("npx", &["-y", "@playwright/mcp"]).await?
    .with_timeout(Duration::from_secs(60));

// Add to a ToolBundle alongside regular tools
let mut bundle = agentix::ToolBundle::new();
bundle.push(tool);

Reliability

  • Automatic retries — exponential backoff for 429 / 5xx responses
  • Usage tracking — per-request token accounting across all providers
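
With .retries(5, 2000) from the builder, exponential backoff produces a growing delay schedule. The sketch below assumes simple doubling with no jitter or cap — an assumption about agentix's exact policy, shown only to make the two retries() parameters concrete:

```rust
// Sketch of an exponential backoff schedule for retries(5, 2000):
// the delay doubles after each failed attempt. Jitter and maximum
// delay caps, if agentix applies any, are not modeled here.
fn backoff_schedule(max_retries: u32, initial_delay_ms: u64) -> Vec<u64> {
    (0..max_retries)
        .map(|attempt| initial_delay_ms << attempt) // 2000, 4000, 8000, ...
        .collect()
}

fn main() {
    println!("{:?}", backoff_schedule(5, 2000));
}
```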

Changelog

0.8.0

  • Replaced LlmClient with Request — self-contained value type with builder pattern
  • Replaced Provider trait with Provider enum — DeepSeek, OpenAI, Anthropic, Gemini
  • Removed shared mutable state — Request is Clone, Send, Sync; caller passes &reqwest::Client
  • Removed AgentConfig from public API — all config lives in Request fields

0.7.0

  • Removed Agent struct — LlmClient is now the sole entry point; callers own the loop
  • Removed Memory trait — InMemory, SlidingWindow, TokenSlidingWindow, LlmSummarizer removed
  • Removed AgentEvent / AgentInput — only LlmEvent remains
  • New LlmClient::complete() — native non-streaming API for all four providers
  • New CompleteResponse — content, reasoning, tool_calls, usage in one struct

0.6.0

  • Non-streaming complete() method on Provider trait
  • post_json helper for non-streaming HTTP POST with retry
  • CompleteResponse type

0.5.0

  • Agent API with chat(), send(), subscribe(), add_tool(), abort(), usage()
  • Concurrent tool execution via FuturesUnordered
  • SlidingWindow fix for orphaned tool messages
  • Default HTTP timeouts (10 s connect, 120 s response)

0.4.x

  • Initial multi-turn API
  • DeepSeek, OpenAI, Anthropic, Gemini providers
  • #[tool] and #[streaming] macros
  • Memory backends, MCP tool support

License

MIT OR Apache-2.0