§unia - Universal AI Client Library
A small, pragmatic Rust library offering a provider-agnostic LLM client architecture with a fully generic options system.
§Features
- Async-first, Tokio-compatible
- Provider-agnostic trait-based design
- Generic model and transport options
- Streaming support via Server-Sent Events
- Type-safe request/response models
§Architecture
The library uses a factory-based design:
- Providers act as factories to create Clients.
- Clients store authentication and configuration state.
- Agents wrap Clients to provide automatic tool execution loops.
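The crate's actual trait signatures are documented in the `providers`, `client`, and `agent` modules; as a rough illustration of the factory layering described above, here is a self-contained sketch with simplified stand-in types (`Provider`, `OpenAIClient`, and `Agent` below are illustrative stand-ins, not unia's real definitions):

```rust
// Stand-in sketch of the factory layering (not unia's actual API).

// A Provider is a factory: it turns credentials/config into a Client.
trait Provider {
    type Client;
    fn create(api_key: String, model: String) -> Self::Client;
}

// A Client stores authentication and configuration state.
struct OpenAIClient {
    api_key: String,
    model: String,
}

impl OpenAIClient {
    fn request(&self, prompt: &str) -> String {
        // A real client would call the provider's HTTP API here.
        format!("[{}] echo: {}", self.model, prompt)
    }
}

struct OpenAI;

impl Provider for OpenAI {
    type Client = OpenAIClient;
    fn create(api_key: String, model: String) -> OpenAIClient {
        OpenAIClient { api_key, model }
    }
}

// An Agent wraps a Client to drive multi-turn loops (tool calls elided).
struct Agent {
    client: OpenAIClient,
}

impl Agent {
    fn run(&self, prompt: &str) -> String {
        self.client.request(prompt)
    }
}

fn main() {
    let client = OpenAI::create("key".to_string(), "gpt-5".to_string());
    let agent = Agent { client };
    println!("{}", agent.run("Hello!"));
}
```

The point of the layering is separation of state: the factory consumes credentials once, the client owns them afterward, and the agent only ever talks to the client.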
§Core Types
- Provider: Factory trait for creating clients.
- Client: Trait for making requests to LLM providers.
- Agent: High-level orchestration for multi-turn conversations and tool use.
- ModelOptions: Model behavior parameters (temperature, max_tokens, etc.)
- TransportOptions: Transport configuration (timeout, proxy, etc.)
- Message: Individual conversation messages with role and content.
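The split between model and transport options keeps sampling behavior separate from wire-level configuration. unia's real struct definitions live in the `options` module; the sketch below uses stand-in structs whose field names are taken only from the parameter examples above (`temperature`, `max_tokens`, `timeout`, `proxy`), not from the crate:

```rust
// Stand-in sketch of the two option groups (not unia's actual structs).

#[derive(Debug, Default)]
struct ModelOptions {
    temperature: Option<f32>, // sampling temperature
    max_tokens: Option<u32>,  // response length cap
}

#[derive(Debug, Default)]
struct TransportOptions {
    timeout_secs: Option<u64>, // request timeout
    proxy: Option<String>,     // proxy URL
}

fn main() {
    // Model knobs change what the LLM generates...
    let model = ModelOptions { temperature: Some(0.2), max_tokens: Some(512) };
    // ...transport knobs change how the request travels.
    let transport = TransportOptions { timeout_secs: Some(30), ..Default::default() };
    println!("{model:?} {transport:?}");
}
```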
§Example
use unia::client::Client;
use unia::model::{Message, Part};
use unia::providers::{OpenAI, Provider};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client using the factory
    let client = OpenAI::create("your-api-key".to_string(), "gpt-5".to_string());

    // Create a message with text content
    let messages = vec![
        Message::User(vec![
            Part::Text {
                content: "Hello!".to_string(),
                finished: true,
            }
        ])
    ];

    // Send request
    let response = client.request(messages, vec![]).await?;
    println!("{:?}", response);
    Ok(())
}
Re-exports§
pub use agent::Agent;
pub use client::Client;
pub use client::ClientError;
pub use client::StreamingClient;
pub use mcp::AttachResources;
pub use mcp::MCPServer;
pub use model::GeneralRequest;
pub use model::Message;
pub use model::Response;
pub use tools::ToolError;
pub use tools::ToolService;
pub use rmcp;
Modules§
- agent
- Agent struct for automatic tool execution with LLM providers.
- api
- client
- Core client trait and error types.
- http
- HTTP client utilities for making requests to LLM APIs.
- mcp
- Model Context Protocol (MCP) server integration.
- model
- Common data models for provider-agnostic LLM requests and responses.
- options
- Generic options structures for model and transport configuration.
- providers
- LLM provider implementations.
- sse
- Server-Sent Events (SSE) stream processing utilities.
- stream
- Streaming support types and utilities.
- tools
- Tool system for automatic function calling with typed input/output.
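The `sse` module's exact API is documented on its own page; as background on what it processes, SSE events arrive as blocks of `field: value` lines separated by a blank line. The following is a minimal, self-contained sketch of that wire format, independent of unia's implementation (`parse_sse` is a hypothetical helper, not a crate function):

```rust
// Minimal SSE frame parser sketch (not unia's `sse` module).
// Events are blocks of "field: value" lines separated by a blank line;
// multiple `data:` lines within one event are joined with '\n'.
fn parse_sse(stream: &str) -> Vec<String> {
    let mut events = Vec::new();
    let mut data_lines: Vec<&str> = Vec::new();
    for line in stream.lines() {
        if line.is_empty() {
            // Blank line: dispatch the accumulated event, if any.
            if !data_lines.is_empty() {
                events.push(data_lines.join("\n"));
                data_lines.clear();
            }
        } else if let Some(rest) = line.strip_prefix("data:") {
            data_lines.push(rest.trim_start());
        }
        // Other fields (`event:`, `id:`, comments) are ignored here.
    }
    // Flush a trailing event that wasn't followed by a blank line.
    if !data_lines.is_empty() {
        events.push(data_lines.join("\n"));
    }
    events
}

fn main() {
    let body = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n\n";
    for event in parse_sse(body) {
        println!("{event}");
    }
}
```

Streaming LLM APIs typically send one JSON delta per `data:` event and a sentinel such as `[DONE]` at the end of the stream.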
Structs§
- Tool
- A tool that can be used by a model.