⚠️ WARNING: This is a pre-release version with an unstable API. Breaking changes may occur between versions. Use with caution and pin to specific versions in production applications.
## Design Philosophy
This library offers an opinionated feature set, rather than trying to be a general-purpose LLM client.
- Type Safety vs. TTFT: Streaming is not supported. We explicitly prioritize type safety and validation completeness over Time To First Token (TTFT). You get a valid struct or an error, never a partial state.
- Alternatives: For a more general-purpose library that supports streaming, disparate providers, and conversational features, consider Rig.
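The all-or-nothing contract can be sketched with plain `Result` semantics. The `Analysis` type and `parse_analysis` function below are hypothetical stand-ins for illustration, not part of this library's API:

```rust
// Hypothetical types for illustration; not part of this library's API.
#[derive(Debug, PartialEq)]
struct Analysis {
    sentiment: String,
    confidence: f64,
}

// A completion either fully validates into a struct or fails --
// there is no partially filled intermediate state to handle.
fn parse_analysis(raw: &str) -> Result<Analysis, String> {
    // Stand-in for real JSON-schema validation.
    let mut parts = raw.splitn(2, ',');
    let sentiment = parts.next().ok_or("missing sentiment")?.trim().to_string();
    let confidence: f64 = parts
        .next()
        .ok_or("missing confidence")?
        .trim()
        .parse()
        .map_err(|e| format!("invalid confidence: {e}"))?;
    Ok(Analysis { sentiment, confidence })
}

fn main() {
    assert_eq!(
        parse_analysis("positive, 0.92"),
        Ok(Analysis { sentiment: "positive".into(), confidence: 0.92 })
    );
    // A malformed response yields an error, never a partial Analysis.
    assert!(parse_analysis("positive").is_err());
}
```

The caller always matches on `Ok`/`Err`; there is no intermediate streaming state to observe.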
## Available Providers
| Provider | API Type | Notes |
|---|---|---|
| OpenAI | Responses API | Uses the /responses endpoint for structured interactions. |
| OpenRouter | Responses API | Uses the /responses endpoint, supporting a wide range of models. |
## Quick Start
```rust
use /* ... */;

let analysis = with(/* ... */)
    .api_key(/* ... */)?
    .model(/* ... */)
    .messages(/* ... */)
    ./* ... */(/* ... */)
    .await?;
```
## Structured Generation
The `#[completion_schema]` macro automatically adds the necessary derives (`Deserialize`, `JsonSchema`) and attributes for structured output. It supports:
- Structs
- Enums (Unit, Tuple, and Struct variants)
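Each variant shape maps to a distinct JSON form. As a rough, std-only illustration of the three shapes, the `Status` enum and hand-rolled `to_json` below are hypothetical and mimic serde's default externally-tagged layout, not necessarily this library's actual wire format:

```rust
// Hypothetical enum showing the three variant shapes the macro accepts.
enum Status {
    Pending,                           // unit variant
    Retrying(u32),                     // tuple variant
    Failed { code: u32, msg: String }, // struct variant
}

// Hand-rolled serializer sketching serde's externally-tagged defaults.
fn to_json(s: &Status) -> String {
    match s {
        Status::Pending => r#""Pending""#.to_string(),
        Status::Retrying(n) => format!(r#"{{"Retrying":{n}}}"#),
        Status::Failed { code, msg } => {
            format!(r#"{{"Failed":{{"code":{code},"msg":"{msg}"}}}}"#)
        }
    }
}

fn main() {
    assert_eq!(to_json(&Status::Pending), r#""Pending""#);
    assert_eq!(to_json(&Status::Retrying(2)), r#"{"Retrying":2}"#);
    assert_eq!(
        to_json(&Status::Failed { code: 500, msg: "timeout".into() }),
        r#"{"Failed":{"code":500,"msg":"timeout"}}"#
    );
}
```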
```rust
let status = with(/* ... */)
    .api_key(/* ... */)?
    .model(/* ... */)
    .messages(/* ... */)
    ./* ... */(/* ... */)
    .await?;
```
Note: The library automatically handles provider-specific requirements (e.g., wrapping non-object types for OpenAI).
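The wrapping behavior can be sketched as follows. The `wrap_schema` helper and the `"value"` wrapper key are assumptions for illustration (OpenAI's structured outputs require the root schema to be an object), not this library's actual convention:

```rust
// Sketch of wrapping a non-object schema for a provider that requires an
// object root. The "value" wrapper key is a hypothetical convention.
fn wrap_schema(schema: &str) -> String {
    // Crude string check, sufficient for this sketch: a real implementation
    // would inspect the parsed schema's root "type".
    if schema.contains(r#""type":"object""#) {
        schema.to_string()
    } else {
        // Nest the non-object schema under a single required property.
        format!(
            r#"{{"type":"object","properties":{{"value":{schema}}},"required":["value"]}}"#
        )
    }
}

fn main() {
    // An enum rendered as a string schema gets wrapped...
    let wrapped = wrap_schema(r#"{"type":"string","enum":["Active","Done"]}"#);
    assert!(wrapped.starts_with(r#"{"type":"object""#));
    assert!(wrapped.contains(r#""required":["value"]"#));
    // ...while an object schema passes through untouched.
    let obj = r#"{"type":"object","properties":{}}"#;
    assert_eq!(wrap_schema(obj), obj);
}
```

On the response side, the same wrapper would be stripped before deserialization, so callers never see it.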
## Text Generation
For plain text, use `TextResponse`.
```rust
use /* ... */;

let response = with(/* ... */)
    // ... configuration ...
    ./* ... */(/* ... */)
    .await?;

println!(/* ... */);
```
See the `examples/` directory for more runnable examples.
## Known Issues
- ..
## License
This project is licensed under the MIT License - see the LICENSE file for details.