§instructors
Type-safe structured output extraction from LLMs.
Define a Rust struct, and instructors will make the LLM return data that
deserializes directly into it — with automatic schema generation, validation,
and retry on failure.
§Quick start
use instructors::prelude::*;

#[derive(Debug, Deserialize, JsonSchema)]
struct Contact {
    name: String,
    email: Option<String>,
    phone: Option<String>,
}

// inside an async fn that returns Result
let client = Client::openai("sk-...");
let result: ExtractResult<Contact> = client
    .extract("Contact John Doe at john@example.com")
    .model("gpt-4o")
    .await?;

println!("{}: {:?}", result.value.name, result.value.email);
println!("tokens: {}, cost: {:?}", result.usage.total_tokens, result.usage.cost);

§Validation
use instructors::prelude::*;

#[derive(Debug, Deserialize, JsonSchema)]
struct User {
    name: String,
    age: u32,
}

let client = Client::openai("sk-...");

// closure-based validation: on Err, the message is fed back to the LLM
// and the extraction is retried
let user: User = client.extract("...")
    .validate(|u: &User| {
        if u.age > 150 { Err("age unrealistic".into()) } else { Ok(()) }
    })
    .await?
    .value;

§Features
- Multi-provider — OpenAI (response_format with strict mode), Anthropic (tool_use), Google Gemini (response_schema), plus any compatible API
- List extraction — extract_many::<T>() returns Vec<T>
- Batch processing — extract_batch::<T>() with configurable concurrency
- Multi-turn — .messages() for conversation history
- Validation — closure-based .validate() or trait-based .validated()
- Lifecycle hooks — .on_request() / .on_response()
- Streaming — SSE streaming via .on_stream() callback
- Images — .image() / .images() for vision models
- Provider fallback — .with_fallback() for auto-failover
- JSON repair — automatic repair of malformed LLM output (trailing commas, single quotes, etc.) before retry
- Retry backoff — exponential backoff on 429/503 via .retry_backoff()
- Request timeout — overall timeout via .timeout()
- Cost tracking — token counting and cost estimation via tiktoken (optional)
- Tracing — structured logging via tracing (optional feature)
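The JSON-repair pass named in the feature list can be sketched roughly as below. This is an illustrative, standalone function handling just the two cases the list mentions (single quotes and trailing commas); it is not the crate's actual implementation, and it does not cover every edge case (e.g. a literal double quote inside a single-quoted string).

```rust
/// Best-effort repair of two common LLM JSON mistakes:
/// single-quoted strings and trailing commas.
fn repair_json(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    let mut in_double = false; // inside a "..." string
    let mut in_single = false; // inside a '...' string
    while let Some(c) = chars.next() {
        match c {
            '\\' if in_double || in_single => {
                // keep escape sequences verbatim
                out.push(c);
                if let Some(n) = chars.next() {
                    out.push(n);
                }
            }
            '"' if !in_single => {
                in_double = !in_double;
                out.push('"');
            }
            '\'' if !in_double => {
                // rewrite single-quoted strings as double-quoted
                in_single = !in_single;
                out.push('"');
            }
            ',' if !in_double && !in_single => {
                // consume whitespace after the comma, then drop the comma
                // if the next character closes the container
                let mut ws = String::new();
                while let Some(&n) = chars.peek() {
                    if n.is_whitespace() {
                        ws.push(n);
                        chars.next();
                    } else {
                        break;
                    }
                }
                match chars.peek() {
                    Some(&'}') | Some(&']') => out.push_str(&ws), // trailing comma
                    _ => {
                        out.push(',');
                        out.push_str(&ws);
                    }
                }
            }
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    let broken = "{'name': 'Ada', 'tags': ['x', 'y',],}";
    println!("{}", repair_json(broken));
    // → {"name": "Ada", "tags": ["x", "y"]}
}
```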
Re-exports§
pub use serde;
Modules§
- prelude
- Common imports for working with instructors.
Structs§
- BackoffConfig - Configuration for exponential backoff on retryable HTTP errors (429, 503).
- BatchBuilder - Builder for concurrent batch extraction.
- Client - LLM client for structured data extraction.
- ExtractBuilder - Builder for configuring an extraction request.
- ExtractResult - Result of a successful extraction containing the typed value and usage info.
- Message
- Usage - Token usage and cost information from an extraction request.
- ValidationError - Validation error with a human-readable message that gets fed back to the LLM on retry.
Enums§
- Error
- ImageInput - Image input for vision-capable models.
Traits§
- JsonSchema - A type which can be described as a JSON Schema document.
- Validate - Trait for types that can validate themselves after extraction.
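A sketch of implementing Validate by hand, as an alternative to the closure form shown earlier. The exact method signature and the ValidationError constructor are assumptions based on this page, not confirmed API; check the Validate and ValidationError item docs.

```rust
use instructors::prelude::*;

#[derive(Debug, Deserialize, JsonSchema)]
struct User {
    name: String,
    age: u32,
}

// Assumed trait shape; see the Validate docs for the real signature.
impl Validate for User {
    fn validate(&self) -> Result<(), ValidationError> {
        if self.age > 150 {
            // the message is fed back to the LLM on retry
            return Err(ValidationError::new("age unrealistic"));
        }
        Ok(())
    }
}

// then, inside an async context:
// let user: User = client.extract("...").validated().await?.value;
```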
Type Aliases§
Derive Macros§
- JsonSchema - Derive macro for the JsonSchema trait.