struct-llm
A lightweight, WASM-compatible Rust library for generating structured outputs from LLMs using a tool-based approach. Inspired by Pydantic AI and luagent.
Features
- Structured Outputs: Type-safe, validated LLM responses using JSON Schema and tool calling
- Provider-Independent: Works with any API supporting tool/function calling (OpenAI, Anthropic, local models)
- Streaming Compatible: Tool-based approach works seamlessly with streaming responses
- Type-Safe: Leverages Rust's type system with serde integration
- WASM-Ready: Synchronous API, no async/await required in the library itself
- Lightweight: Minimal dependencies, you bring your own HTTP client
- Flexible: Use derive macros for convenience or implement traits manually
Why Tool-Based Structured Outputs?
Instead of relying on provider-specific features like OpenAI's `response_format`, this library uses a universal tool-calling approach:
- Your output schema is registered as a special `final_answer` tool
- The LLM calls this tool when it is ready to return structured data
- The library validates and deserializes the tool call arguments
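With an OpenAI-style API, the registered `final_answer` tool looks roughly like the payload below (shown here built with `serde_json`; the description text and schema contents are illustrative):

```rust
use serde_json::json;

// Roughly what the registered final_answer tool looks like in an OpenAI-style
// request body; "parameters" is the JSON Schema generated from your Rust type.
let final_answer_tool = json!({
    "type": "function",
    "function": {
        "name": "final_answer",
        "description": "Return the final structured answer",
        "parameters": {
            "type": "object",
            "properties": { /* generated from your output type's fields */ },
            "required": [ /* field names required by your schema */ ]
        }
    }
});
```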
Benefits:
- ✅ Works with streaming (tool calls can be streamed)
- ✅ Provider-independent (any model supporting tool calling)
- ✅ Mix structured output with regular tools
- ✅ More reliable than parsing raw JSON from text
Quick Start
See the examples directory for complete, working examples; the abridged snippet below shows the core flow.
```rust
use serde::Deserialize;
use struct_llm::{build_enforced_tool_request, extract_tool_calls, parse_tool_response, StructuredOutput};

// Define your output structure (field names here are illustrative)
#[derive(StructuredOutput, Deserialize, Debug)]
struct SentimentAnalysis {
    sentiment: String,
    confidence: f64,
}

// Get tool definition and build a request that ENFORCES the tool call
let tool = SentimentAnalysis::tool_definition();
let messages = vec![/* your chat messages */];
let mut request = build_enforced_tool_request(&tool, messages);
request.model = "gpt-4o-mini".into();

// Your async code makes the HTTP request
let response = your_api_client
    .post("https://api.openai.com/v1/chat/completions")
    .json(&request)
    .send()
    .await?;

// Extract and validate the structured response (sync)
let tool_calls = extract_tool_calls(&response.text().await?)?;
let result: SentimentAnalysis = parse_tool_response(&tool_calls[0])?;

println!("sentiment: {}", result.sentiment);
println!("confidence: {}", result.confidence);
```
Key insight: `build_enforced_tool_request()` ensures the LLM must call your tool (like Pydantic AI and luagent do), guaranteeing you always get structured output back.
Architecture
This library is designed to be a utility layer that you integrate into your existing async code:

```
┌──────────────────────────────────────────────┐
│ Your Application (async/await)               │
│ - Makes HTTP requests (reqwest, ureq, etc.)  │
│ - Handles API keys, retries, rate limiting   │
│ - Manages conversation state                 │
└──────────────────────────────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────┐
│ struct-llm (sync utilities)                  │
│ - Converts Rust types to JSON Schema         │
│ - Builds tool definitions for API requests   │
│ - Parses tool calls from responses           │
│ - Validates and deserializes tool arguments  │
└──────────────────────────────────────────────┘
```
Core Components
1. StructuredOutput Trait
Defines how a type can be used as a structured LLM output:
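As a rough sketch of its likely shape (only `tool_definition()` is referenced elsewhere in this README; the other items, and the `ToolDefinition` type, are assumptions):

```rust
// Sketch only: the crate's real trait and types may differ.

/// Provider-neutral tool definition (assumed shape)
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
    pub parameters: serde_json::Value, // JSON Schema for the tool's arguments
}

pub trait StructuredOutput: serde::de::DeserializeOwned {
    /// Name of the tool registered with the LLM, e.g. "final_answer"
    fn tool_name() -> &'static str;

    /// JSON Schema describing this type's fields
    fn json_schema() -> serde_json::Value;

    /// Complete tool definition sent with the API request; assumed to be
    /// assembled from the two methods above.
    fn tool_definition() -> ToolDefinition {
        ToolDefinition {
            name: Self::tool_name().to_string(),
            description: String::new(),
            parameters: Self::json_schema(),
        }
    }
}
```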
2. Derive Macro (Ergonomic API)
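A minimal derive-based sketch (field names and doc comments are illustrative):

```rust
use serde::Deserialize;
use struct_llm::StructuredOutput;

#[derive(StructuredOutput, Deserialize, Debug)]
struct SentimentAnalysis {
    /// "positive", "negative", or "neutral"
    sentiment: String,
    /// Confidence in the classification, 0.0 to 1.0
    confidence: f64,
}

// The derive generates the schema and the tool definition used as final_answer.
let tool = SentimentAnalysis::tool_definition();
```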
3. Provider Adapters
Handle API-specific formatting differences:
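The same logical tool has to be serialized differently per provider. For example (the payload shapes below reflect the public OpenAI and Anthropic tool formats, not necessarily this crate's internal representation):

```rust
use serde_json::json;

let schema = json!({ "type": "object", "properties": {} }); // your generated schema

// OpenAI-style function tool
let openai_tool = json!({
    "type": "function",
    "function": { "name": "final_answer", "parameters": schema.clone() }
});

// Anthropic-style tool
let anthropic_tool = json!({
    "name": "final_answer",
    "input_schema": schema
});
```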
4. Tool Call Parsing
Extract tool calls from various response formats. The signatures below are a sketch; exact parameter and error types may differ:

```rust
/// Extract tool calls from API response text
fn extract_tool_calls(response_body: &str) -> Result<Vec<ToolCall>, StructLlmError>

/// Parse and validate a specific tool call
fn parse_tool_response<T: StructuredOutput>(call: &ToolCall) -> Result<T, StructLlmError>
```
Usage Examples
Basic Structured Output
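A sketch of the full round trip from an async context, assuming a `reqwest` client and an OpenAI-compatible chat-completions endpoint; request field names and helper signatures are illustrative:

```rust
use serde::Deserialize;
use struct_llm::{build_enforced_tool_request, extract_tool_calls, parse_tool_response, StructuredOutput};

#[derive(StructuredOutput, Deserialize, Debug)]
struct SentimentAnalysis {
    sentiment: String,
    confidence: f64,
}

async fn analyze(
    client: &reqwest::Client,
    api_key: &str,
) -> Result<SentimentAnalysis, Box<dyn std::error::Error>> {
    // Build a request that forces the model to call the final_answer tool
    let tool = SentimentAnalysis::tool_definition();
    let mut request = build_enforced_tool_request(&tool, vec![/* your chat messages */]);
    request.model = "gpt-4o-mini".into();

    // You own the HTTP layer; struct-llm only builds and parses payloads
    let body = client
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&request)
        .send()
        .await?
        .text()
        .await?;

    // Synchronous extraction, validation, and deserialization
    let tool_calls = extract_tool_calls(&body)?;
    Ok(parse_tool_response(&tool_calls[0])?)
}
```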
Streaming Support
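A sketch of one way to consume a streamed response today: reduce the provider's SSE stream to a stream of tool-call argument deltas in your own HTTP code, accumulate them, and parse once the stream ends (the incremental parser is on the v0.2 roadmap). The example assumes `futures-util` for the stream combinators and uses `serde_json` directly for the final parse:

```rust
use futures_util::StreamExt;
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct SentimentAnalysis {
    sentiment: String,
    confidence: f64,
}

// Collect streamed tool-call argument deltas, then parse the completed JSON.
// `argument_deltas` is whatever your own SSE handling yields for each chunk.
async fn collect_streamed_answer<S>(mut argument_deltas: S) -> Result<SentimentAnalysis, serde_json::Error>
where
    S: futures_util::Stream<Item = String> + Unpin,
{
    let mut arguments = String::new();
    while let Some(delta) = argument_deltas.next().await {
        arguments.push_str(&delta);
    }

    // Once the stream ends, the accumulated arguments are ordinary JSON and can
    // be validated/deserialized synchronously.
    serde_json::from_str(&arguments)
}
```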
Custom Schema (No Derive Macro)
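A sketch of implementing the trait by hand instead of deriving it. The method names follow the trait sketch in Core Components and are assumptions; the schema itself is ordinary `serde_json`:

```rust
use serde::Deserialize;
use serde_json::json;
use struct_llm::StructuredOutput;

#[derive(Deserialize, Debug)]
struct SentimentAnalysis {
    sentiment: String,
    confidence: f64,
}

// Method names mirror the trait sketch above; the real trait may differ.
impl StructuredOutput for SentimentAnalysis {
    fn tool_name() -> &'static str {
        "final_answer"
    }

    fn json_schema() -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
                "confidence": { "type": "number", "minimum": 0.0, "maximum": 1.0 }
            },
            "required": ["sentiment", "confidence"]
        })
    }
}
```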
Mixing Regular Tools with Structured Output
```rust
// Define regular tools (search_tool and weather_tool are your own definitions)
let tools = vec![search_tool, weather_tool, SentimentAnalysis::tool_definition()];

// The LLM can call regular tools first, then the final_answer tool
let response = call_api_with_tools(&tools, &messages).await?;
let tool_calls = extract_tool_calls(&response)?;

// Handle each tool call (field access shown is illustrative)
for tool_call in tool_calls {
    if tool_call.name == "final_answer" {
        let answer: SentimentAnalysis = parse_tool_response(&tool_call)?;
    } else {
        // Execute the regular tool and send its result back to the model
    }
}
```
WASM Compatibility
The library is designed to work in WASM environments:
```rust
// No async/await in the library itself
// No file system access
// No std-only dependencies
use struct_llm::*; // pure, synchronous data transforms work the same on wasm32
```
Comparison to Alternatives
| Feature | struct-llm | raw JSON parsing | provider-specific APIs |
|---|---|---|---|
| Streaming | ✅ Yes | ❌ No | ⚠️ Sometimes |
| Provider-independent | ✅ Yes | ⚠️ Manual | ❌ No |
| Type-safe | ✅ Yes | ❌ No | ✅ Yes |
| WASM-compatible | ✅ Yes | ✅ Yes | ⚠️ Varies |
| Mix with regular tools | ✅ Yes | ❌ No | ⚠️ Sometimes |
| Validation | ✅ Automatic | ❌ Manual | ✅ Automatic |
Error Handling
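The crate's concrete error type isn't pinned down here. As a rough sketch, assuming the parsing and validation helpers return a single error enum (variant names are illustrative), so you can match on the failure mode or simply propagate with `?`:

```rust
// Sketch only: the actual error type and variants may differ.
#[derive(Debug)]
pub enum StructLlmError {
    /// The response contained no tool calls to extract
    NoToolCalls,
    /// The tool call arguments were not valid JSON
    InvalidJson(serde_json::Error),
    /// The arguments did not satisfy the generated schema
    Validation(String),
}

// Helpers like extract_tool_calls / parse_tool_response would then return
// Result<_, StructLlmError> for straightforward error propagation.
```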
Roadmap
v0.1 - Core Functionality
- `StructuredOutput` trait
- Derive macro for `StructuredOutput`
- Provider adapters (OpenAI, Anthropic, Local)
- Tool call extraction and parsing
- JSON Schema generation from Rust types
- Basic validation
v0.2 - Streaming & Ergonomics
- Streaming parser for incremental responses
- Schema attributes (`#[schema(description = "...")]`)
- Helper functions for common patterns
- Better error messages
v0.3 - Advanced Features
- Schema caching for performance
- Custom validators
- Tool execution framework (optional)
- Conversation state helpers
Design Philosophy
- Utility Layer: You handle HTTP, we handle schemas and parsing
- Type Safety: Leverage Rust's type system, not runtime magic
- WASM First: No async, no filesystem, pure data transforms
- Bring Your Own Client: Works with reqwest, ureq, or the browser fetch API
- Simple & Focused: Does one thing well - structured outputs
Contributing
This library is in early development. Contributions welcome!
- Keep the API synchronous (no async in the library)
- Maintain WASM compatibility
- Add tests for new features
- Document provider-specific quirks
License
MIT OR Apache-2.0
Inspiration
- Pydantic AI - Python agent framework with structured outputs
- luagent - Lua agent library using tool-based outputs
- instructor - Structured outputs for OpenAI