# UniStructGen
Rust toolkit for type-safe code generation, AI tool calling, structured LLM outputs, and compiler-driven agents.
Parse JSON, OpenAPI, SQL, GraphQL, Markdown, or .env schemas into a language-agnostic intermediate representation (IR), then generate idiomatic Rust structs, JSON Schema for LLM structured outputs, or wire up AI tool calling -- all with compile-time safety.
**Author:** Maxim Bogovic · **Version:** 0.1.0 · **License:** MIT / Apache-2.0 · **Rust:** 1.70+
## Why developers use UniStructGen
- Ship types fast — generate Rust structs from real JSON and schemas at compile time.
- Keep LLM tools correct — auto‑generate JSON Schemas and tool definitions from Rust functions.
- Reduce boilerplate — one source of truth for types, validation, and docs.
Try the killer example: see `examples/killer-example/README.md`.
## What Problem Does This Solve?
You have data schemas -- JSON payloads, database DDL, OpenAPI specs, GraphQL types, environment variables. You need Rust structs that match. You also need JSON Schema to tell an LLM exactly what shape of response you expect. And you need to turn plain Rust functions into tools the LLM can call.
UniStructGen gives you one pipeline for all of this:
```text
Schema (JSON / SQL / OpenAPI / GraphQL / .env / Markdown)
                 |
                 v
              Parser ---> IR (Intermediate Representation) ---> Generator
                                                                    |
                                                                    v
                                       Rust structs or JSON Schema (Draft 2020-12)
```
Instead of hand-writing struct definitions, JSON Schema, serde attributes, and tool boilerplate, you describe the shape once and generate everything.
## Project Status

- **Stable core:** `core/`, `codegen/`, `parsers/*`, `proc-macro/`, and `cli/` are the primary developer-facing surface and should remain backward compatible within minor versions.
- **Experimental/optional:** `llm/`, `mcp/`, `agent/`, and `schema-registry/` are evolving and may change more frequently.
- **Compile-time fetch controls:** set `UNISTRUCTGEN_FETCH_OFFLINE=1` to disable network access, `UNISTRUCTGEN_FETCH_CACHE=0` to disable caching, `UNISTRUCTGEN_FETCH_CACHE_DIR=/path` to override the cache location, and `UNISTRUCTGEN_FETCH_TIMEOUT_MS=...` to override timeouts.
## Table of Contents
- Quick Start
- Killer Example (60 Seconds)
- Core Feature: `#[ai_tool]` Macro
- Core Feature: JSON Schema for Structured LLM Outputs
- Core Feature: Reverse IR (Rust -> IR -> Schema)
- Core Feature: AI Validation Loop
- Core Feature: Compiler Diagnostics for AI Agents
- Core Feature: Compile-Time API Fetching
- Core Feature: LLM Client Abstraction
- Core Feature: MCP Server (Model Context Protocol)
- Core Feature: Agent Runtime & Orchestration
- All 6 Parsers
- Builder API
- Pipeline API
- CLI
- Architecture
- Crate Map
- Type Mapping Reference
- Examples
- Blog
- Development
- License
## Quick Start
Add the crates you need to Cargo.toml:
```toml
[dependencies]
# Core IR types, traits, ToolRegistry, validation, Context
unistructgen-core = "0.1"
# Rust code renderer + JSON Schema generator
unistructgen-codegen = "0.1"
# Proc macros: generate_struct_from_json!, #[ai_tool], openapi_to_rust!, etc.
unistructgen-macro = "0.1"
# LLM clients (OpenAI, Ollama) with structured output support
unistructgen-llm = "0.1"

# Parsers -- pick what you need
unistructgen-json-parser = "0.1"
unistructgen-openapi-parser = "0.1"
unistructgen-markdown-parser = "0.1"

# These parsers exist but are used primarily via proc-macros:
# unistructgen-sql-parser, unistructgen-graphql-parser, unistructgen-env-parser
```
Minimal example -- generate a Rust struct from JSON at compile time:
```rust
use unistructgen_macro::generate_struct_from_json;

// Invocation syntax shown here is illustrative
generate_struct_from_json!(
    name = "User",
    json = r#"{ "id": 1, "name": "Alice", "tags": ["admin"] }"#
);

// Now `User` struct exists with fields: id (i64), name (String), tags (Vec<String>)
// Derives: Debug, Clone, PartialEq, Serialize, Deserialize
```
## Killer Example (60 Seconds)
One small program that shows the core value: types + tool schemas + safe execution.
What it demonstrates:
- Compile-time Rust types from JSON
- LLM tool schema generation from functions
- Safe, structured tool execution
See: examples/killer-example/README.md
## Core Feature: `#[ai_tool]` Macro
Turn any Rust function into an LLM-callable tool with a single attribute. The macro generates a JSON Schema from the function signature, creates a tool struct implementing AiTool, and handles JSON argument deserialization.
```rust
use unistructgen_macro::ai_tool;
use unistructgen_core::{Context, ToolRegistry};

/// Calculate shipping cost based on weight and destination
#[ai_tool]
async fn calculate_shipping(weight_kg: f64, destination: String) -> f64 {
    // Pricing logic here is illustrative
    let base = if destination == "domestic" { 5.0 } else { 12.0 };
    base + weight_kg * 2.5
}
// The macro generates `CalculateShippingTool` struct implementing `AiTool`
```
### What `#[ai_tool]` generates

Given a function `fn calculate_shipping(weight_kg: f64, destination: String) -> f64`:

- JSON Schema (Draft 2020-12) derived from the Rust types via the IR type system
- `CalculateShippingTool` struct implementing the `AiTool` trait
- Argument deserialization struct with `serde::Deserialize`
- Description extracted from the function's `///` doc comment
### Dependency injection with `#[context]`
Tools can access shared resources (database pools, API clients) via the Context container:
```rust
/// Get user balance from database
#[ai_tool]
async fn get_balance(#[context] db: DbPool, user_id: i64) -> f64 {
    db.balance_for(user_id).await // method name illustrative
}

// Setup
let mut context = Context::new();
context.insert(db_pool);

let mut registry = ToolRegistry::new();
registry.register(GetBalanceTool);

// Execute -- Context provides the DbPool, LLM provides user_id
// (argument shape of `execute` is illustrative)
let result = registry.execute("get_balance", r#"{"user_id": 42}"#, &context).await?;
```
### Parallel batch execution
```rust
use unistructgen_core::ToolCall;

// Constructor shape of ToolCall is illustrative
let calls = vec![
    ToolCall::new("calculate_shipping", r#"{"weight_kg": 2.0, "destination": "Berlin"}"#),
    ToolCall::new("get_balance", r#"{"user_id": 42}"#),
];
let results = registry.execute_batch(calls, &context).await;
// Returns Vec<(String, ToolResult)> -- all executed concurrently
```
### Supported argument types

| Rust Type | JSON Schema Type | Notes |
|---|---|---|
| `String`, `&str` | `"string"` | |
| `i8`, `i16`, `i32` | `"integer"` | |
| `i64`, `isize` | `"integer"` | |
| `f32`, `f64` | `"number"` | |
| `bool` | `"boolean"` | |
| `Vec<T>` | `"array"` | Recursive |
| `Option<T>` | nullable | Recursive |
## Core Feature: JSON Schema for Structured LLM Outputs

Generate Draft 2020-12 JSON Schema from any IR module. Use it as a contract for OpenAI `response_format.json_schema`, or inject it into system prompts for Ollama.
```rust
use unistructgen_core::{CodeGenerator, FieldType, StructGen};
use unistructgen_codegen::JsonSchemaRenderer;

// Define the response structure (field names are illustrative)
let module = StructGen::new()
    .name("AgentResponse")
    .field("answer", FieldType::String)
    .field("confidence", FieldType::F64)
    .field("sources", FieldType::Vec(Box::new(FieldType::String)))
    .field("needs_clarification", FieldType::Bool)
    .build_ir_module();

// Generate JSON Schema
let renderer = JsonSchemaRenderer::new().fragment();
let schema = renderer.generate(&module)?;
```
Output: a JSON Schema object describing `AgentResponse`, with each field under `properties` and every non-optional field listed in `required`.
### Using with OpenAI
```rust
use unistructgen_llm::{CompletionRequest, Message, OpenAiClient};
use serde_json::Value;

// Reads OPENAI_API_KEY from environment
let client = OpenAiClient::new()?;

let schema_value: Value = serde_json::from_str(&schema)?;
let response = client
    .complete(CompletionRequest {
        messages: vec![Message::user("Summarize the findings")], // constructor illustrative
        response_schema: Some(schema_value),
        ..Default::default()
    })
    .await?;
// Response is guaranteed to match the AgentResponse schema
```
### Schema features

- `$defs` with `$ref` for nested types and cross-references
- Recursive type support
- Strict mode (`additionalProperties: false`) for OpenAI compatibility
- Fragment mode (`.fragment()`) omits `$schema` for embedding in larger payloads
- All IR types mapped: primitives, `Option<T>`, `Vec<T>`, `HashMap<K,V>`, named references, enums as string unions
## Core Feature: Reverse IR (Rust -> IR -> Schema)

Define your types in Rust and generate the IR and schema from them. This reverses the standard flow, letting Rust serve as the source of truth.
```rust
use unistructgen_core::{IRModule, IntoIR};
use unistructgen_codegen::JsonSchemaRenderer;

// Illustrative type; see the supported #[field] attributes below
#[derive(IntoIR)]
struct User {
    #[field(doc = "Unique identifier")]
    id: i64,
    #[field(format = "email", min_length = 5)]
    email: String,
}

// Get the IR definition at runtime
let definition = User::ir_definition().unwrap();

// Wrap in a module (constructor argument illustrative)
let mut module = IRModule::new("user");
module.add_type(definition);

// Generate JSON Schema
let schema = JsonSchemaRenderer::new().generate(&module)?;
```
Supported `#[field]` attributes:

- `doc = "..."`: Overrides/adds documentation
- `min_length`, `max_length`: String/array length constraints
- `min_value`, `max_value`: Numeric range constraints
- `pattern = "..."`: Regex pattern
- `format = "..."`: Format string (e.g., "email", "date-time")
- `optional`: Force optionality in IR
## Core Feature: AI Validation Loop

LLMs sometimes produce malformed JSON. UniStructGen provides structured validation errors and auto-generated correction prompts to send back to the LLM for self-healing.
```rust
use unistructgen_core::{map_serde_error, ValidationReport};

let mut response_json = llm_client.complete(request.clone()).await?;

for attempt in 0..3 {
    match serde_json::from_str::<AgentResponse>(&response_json) {
        Ok(parsed) => break,
        Err(e) => {
            // Turn the serde failure into a structured error, render a
            // correction prompt, and retry (exact call shapes illustrative).
            let report = ValidationReport::from(map_serde_error(e, &response_json));
            let correction = report.to_correction_prompt();
            response_json = llm_client.complete(correction.into()).await?;
        }
    }
}
```
### Validation types

| Type | Purpose |
|---|---|
| `AiValidationError` | Structured error with `path`, `message`, `invalid_value`, `correction_hint` |
| `ValidationReport` | Aggregates errors; generates correction prompts via `to_correction_prompt()` |
| `map_serde_error()` | Converts `serde_json::Error` to `AiValidationError` with field path extraction |
| `AiValidatable` trait | For types that can self-validate: `fn validate_ai(&self) -> ValidationReport` |
## Core Feature: Compiler Diagnostics for AI Agents

Build AI agents that write Rust code and iterate on compiler errors. The diagnostics module parses the structured output of `cargo check --message-format=json`.
```rust
use unistructgen_core::CargoDiagnostics;
use std::path::Path;

// Run cargo check on a project directory (call shape illustrative)
let errors = CargoDiagnostics::check(Path::new("./my-project"))?;

for error in &errors {
    println!("{error:?}");
}

// Feed errors back to AI for correction
if !errors.is_empty() {
    let prompt = format!("Fix these Rust compiler errors:\n{errors:#?}");
    // ... send `prompt` to the LLM ...
}
```
### Code patching from LLM output

The patch module provides `CodeFix` and `Hunk` structs for applying LLM-generated code fixes:
```rust
use unistructgen_core::CodeFix;

// LLM can output structured fixes as JSON
let fix: CodeFix = serde_json::from_str(&llm_output)?;
// fix.file_path, fix.explanation, fix.changes (Vec<Hunk>)

let fixed_code = fix.apply(&original_source)?; // argument illustrative
```
## Core Feature: Compile-Time API Fetching
Fetch a JSON API at compile time and generate type-safe structs. No manual type definitions. No codegen scripts.
```rust
use unistructgen_macro::struct_from_external_api;

// URL is illustrative; any JSON endpoint works
struct_from_external_api!(
    struct_name = "GithubRepo",
    url = "https://api.github.com/repos/rust-lang/rust"
);

// GithubRepo struct is now available with all fields from the API response
```
### Authentication methods

| Parameter | Format | Protocol |
|---|---|---|
| `auth_bearer = "token"` | Bearer token | OAuth2, JWT |
| `auth_api_key = "X-API-Key:value"` | Custom header | API key |
| `auth_basic = "user:password"` | HTTP Basic Auth | Basic |
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `struct_name` | string | `"ApiResponse"` | Name of the generated struct |
| `url_api` / `url` | string | required | API endpoint URL |
| `method` | string | `"GET"` | HTTP method (GET, POST, PUT, DELETE) |
| `serde` | bool | `true` | Add Serialize/Deserialize derives |
| `default` | bool | `false` | Add Default derive |
| `optional` | bool | `false` | Make all fields `Option<T>` |
| `max_depth` | int | unlimited | Limit nested object depth |
| `max_entity_count` | int | unlimited | Limit array items used for inference |
| `timeout` | int | `30000` | Request timeout in ms |
## Core Feature: LLM Client Abstraction
Unified async trait for OpenAI and Ollama with built-in structured output support.
```rust
use unistructgen_llm::{CompletionRequest, LlmClient, Message};

// OpenAI (reads OPENAI_API_KEY from env)
use unistructgen_llm::OpenAiClient;
let openai = OpenAiClient::new()?;

// Ollama (local, defaults to http://localhost:11434)
use unistructgen_llm::OllamaClient;
let ollama = OllamaClient::new();

// Factory with auto-detection
use unistructgen_llm::{LlmClientFactory, Provider};
let client = LlmClientFactory::new()
    .with_provider(Provider::Auto) // OpenAI if key exists, else Ollama; variant name illustrative
    .with_model("llama3")          // model name illustrative
    .build()?;
### LlmClient trait

Both clients implement the same async `complete(CompletionRequest)` entry point, so application code stays provider-agnostic.
### CompletionRequest fields

| Field | Type | Description |
|---|---|---|
| `messages` | `Vec<Message>` | Conversation messages (system, user, assistant) |
| `temperature` | `Option<f32>` | Sampling temperature |
| `max_tokens` | `Option<u32>` | Max response tokens |
| `response_schema` | `Option<Value>` | JSON Schema for structured output |
### Structured output per provider

- **OpenAI:** uses `response_format.json_schema` with `strict: true` (native API support)
- **Ollama:** enables `format: "json"` and injects the schema into the system prompt
## Core Feature: MCP Server (Model Context Protocol)
Turn your Rust functions into an MCP Server compatible with Claude Desktop, Cursor, and Windsurf in one line.
```rust
use unistructgen_macro::ai_tool;
use unistructgen_core::{Context, ToolRegistry};
use unistructgen_mcp::serve_stdio;
use std::sync::Arc;

/// Search the project documentation (illustrative tool)
#[ai_tool]
async fn search_docs(query: String) -> String {
    format!("results for {query}")
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut registry = ToolRegistry::new();
    registry.register(SearchDocsTool);
    // One line: expose the registry over MCP stdio (signature illustrative)
    serve_stdio(Arc::new(registry)).await
}
```
This automatically implements the Model Context Protocol:
- `tools/list`: Exports your tools with full JSON Schema definitions
- `tools/call`: Executes your Rust functions with arguments provided by the LLM
- `initialize`: Handles handshake and capabilities
Supported transports:

- `serve_stdio`: For local agents (Claude Desktop, IDEs)
- `serve_sse`: For remote/web agents (requires the `sse` feature)
## Core Feature: Agent Runtime & Orchestration
Build autonomous agents and pipelines directly in Rust. The runtime handles the ReAct loop (Reasoning + Acting), tool execution, and context management.
```rust
use unistructgen_agent::Agent;
use unistructgen_core::ToolRegistry;
use unistructgen_macro::ai_tool;
use std::sync::Arc;

// 1. Define Tools
#[ai_tool]
async fn web_search(query: String) -> String {
    format!("results for {query}") // illustrative
}

// 2. Build Agent (builder arguments are illustrative)
let researcher = Agent::builder()
    .name("researcher")
    .client(Arc::new(llm_client))
    .tools(registry)
    .system_prompt("You are a thorough research assistant.")
    .build()?;

// 3. Run (Auto-loop: Thought -> Action -> Observation -> Thought)
let answer = researcher.run("What changed in Rust 1.70?").await?;
```
### Multi-Agent DAG Pipeline
Chain agents together to solve complex tasks.
```rust
// Agent names and transitions are illustrative
let pipeline = Pipeline::builder()
    .agent(researcher)
    .agent(writer)
    .agent(reviewer)
    .start("researcher")
    .transition("researcher", "writer")
    .transition("writer", "reviewer")
    .build()?;

let result = pipeline.run("Draft a release announcement").await?;
```
## All 6 Parsers

UniStructGen includes parsers for six input formats. Each implements the `Parser` trait and produces an `IRModule`.
### 1. JSON
```rust
// Proc macro (compile-time); invocation syntax illustrative
generate_struct_from_json!(
    name = "User",
    json = r#"{ "id": 1, "name": "Alice" }"#
);

// Runtime pipeline
use unistructgen_json_parser::JsonParser;

let mut parser = JsonParser::new("User");
let ir = parser.parse(&json_input)?;
```
Smart type inference detects: DateTime, UUID, Email, URL patterns in string values.
### 2. OpenAPI / Swagger

```rust
// Proc macro; `file` parameter name is illustrative
openapi_to_rust!(file = "openapi.yaml");
// Also supports: spec = "inline yaml...", url = "https://..."
```
Client generation is a typed scaffold (best-effort) and may require manual adjustments for edge cases.
### 3. SQL DDL

```rust
// Invocation syntax illustrative
generate_struct_from_sql!(r#"
    CREATE TABLE users (
        id BIGINT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    );
"#);
```
### 4. GraphQL Schema

```rust
// Invocation syntax illustrative
generate_struct_from_graphql!(r#"
    type User {
        id: ID!
        name: String!
    }
"#);
```
### 5. .env Files

```rust
// Invocation syntax illustrative
generate_struct_from_env!(name = "AppConfig", file = ".env");
// Generates: AppConfig { database_url: String, port: i64, debug: bool }
```
### 6. Markdown Tables

```rust
// Runtime via MarkdownParser or CLI:
// unistructgen generate --input schema.md --name Config
```
The markdown parser also includes a semantic chunker for RAG pipelines:
```rust
use unistructgen_markdown_parser::SemanticChunker;

let markdown = std::fs::read_to_string("docs/guide.md")?; // path illustrative
let chunks = SemanticChunker::new().chunk(&markdown);     // constructor illustrative
// Each chunk preserves heading hierarchy and semantic boundaries
```
## Builder API
Build IR structs and enums programmatically with a fluent API, then generate Rust code or JSON Schema.
```rust
use unistructgen_core::{EnumGen, FieldType, ModuleGen, StructGen};

// Field and variant names throughout are illustrative

// Struct
let code = StructGen::new()
    .name("User")
    .doc("A registered user")
    .field("id", FieldType::I64)
    .field("name", FieldType::String)
    .field_optional("email", FieldType::String)
    .field_with("age", FieldType::I32, |f| f.doc("Age in years"))
    .with_serde()
    .with_default()
    .generate()?;

// Enum
let code = EnumGen::new()
    .name("Status")
    .variant("Active")
    .variant_with_rename("Inactive", "inactive")
    .variant("Banned")
    .with_serde()
    .generate()?;

// Module with multiple types
let code = ModuleGen::new()
    .add_struct(user_struct)
    .add_enum(status_enum)
    .generate()?;
```
### Field constraints
Constraints generate #[validate(...)] attributes on the rendered Rust struct:
```rust
// Constraint arguments are illustrative
FieldBuilder::new("email", FieldType::String)
    .optional()
    .doc("Primary contact email")
    .rename("user_email")   // #[serde(rename = "user_email")]
    .length(5, 255)         // #[validate(length(min = 5, max = 255))]
    .pattern(r".+@.+")      // #[validate(regex = "...")]
    .format("email")        // #[validate(email)]
    .build();
```
### Quick JSON parsing

```rust
use unistructgen_core::from_json;

let code = from_json(r#"{ "id": 1, "name": "Alice" }"#)
    .struct_name("User")
    .with_serde()
    .generate()?;
```
## Pipeline API
Chain a parser, transformers, and generator into a processing pipeline:
```rust
use unistructgen_core::{FieldOptionalizer, Pipeline};
use unistructgen_json_parser::JsonParser;
use unistructgen_codegen::RustRenderer;

// Constructor arguments are illustrative
let mut pipeline = Pipeline::new(JsonParser::new("User"), RustRenderer::default())
    .add_transformer(FieldOptionalizer);

let rust_code = pipeline.execute(&json_input)?;
```
### Built-in transformers

| Transformer | Effect |
|---|---|
| `FieldOptionalizer` | Wraps all fields in `Option<T>` |
| `DocCommentAdder` | Adds doc comments to structs/fields |
| `TypeDeduplicator` | Deduplicates identical nested struct definitions |
| `FieldRenamer` | Renames fields (e.g. snake_case conversion) |
### Plugin system
Plugins hook into the pipeline at parse and generate stages:
```rust
use unistructgen_core::{LoggingPlugin, PluginRegistry};

let mut plugins = PluginRegistry::new();
plugins.register(Box::new(LoggingPlugin))?; // registration shape illustrative

let input = plugins.before_parse(raw_input)?;
let module = plugins.after_parse(module)?;
let code = plugins.after_generate(code)?;
```
## CLI

```bash
# Generate Rust structs from JSON
unistructgen generate --input data.json --name User

# Generate from Markdown table
unistructgen generate --input schema.md --name Config

# Generate HTTP client scaffold from OpenAPI spec (flags illustrative)
unistructgen client --input openapi.yaml

# AI-powered error fixing (experimental; flags illustrative)
unistructgen fix
```
## Architecture
```text
+----------+     +-----------------------------------------------------------+
| JSON     |--+  |                       UniStructGen                        |
| OpenAPI  |--+  |                                                           |
| SQL      |--+  |  +--------+     +----+     +-------------+     +--------+ |
| GraphQL  |--+->|  | Parser |---->| IR |---->| Transformer |---->| Rust / | |
| .env     |--+  |  +--------+     +----+     +-------------+     | JSON   | |
| Markdown |--+  |  [Plugins]                 [Plugins]           | Schema | |
+----------+     |                                                +--------+ |
                 +-----------------------------------------------------------+
                                         |
                 +------------------+----+-------------+------------------+
                 v                  v                  v                  v
          +--------------+    +-----------+    +------------+    +------------+
          | #[ai_tool]   |    | LLM       |    | Validation |    | MCP        |
          | ToolRegistry |    | Client    |    | Loop       |    | Server     |
          | JSON Schema  |    | OpenAI /  |    | Reports /  |    | stdio /    |
          +--------------+    | Ollama    |    | Prompts    |    | sse        |
                              +-----------+    +------------+    +------------+
```
### Core traits

| Trait | Module | Purpose | Implementations |
|---|---|---|---|
| `Parser` | `core::parser` | Input format to IR | `JsonParser`, `OpenApiParser`, `MarkdownParser`, `SqlParser`, `GraphqlParser`, `EnvParser` |
| `CodeGenerator` | `core::codegen` | IR to output code | `RustRenderer`, `JsonSchemaRenderer` |
| `IRTransformer` | `core::transformer` | Transform IR in-flight | `FieldOptionalizer`, `DocCommentAdder`, `TypeDeduplicator`, `FieldRenamer` |
| `Plugin` | `core::plugin` | Pipeline hooks | `LoggingPlugin`, `HeaderPlugin`, custom |
| `AiTool` | `core::tools` | LLM tool interface | Auto-generated by `#[ai_tool]` |
| `LlmClient` | `llm` | LLM provider abstraction | `OpenAiClient`, `OllamaClient` |
| `AiValidatable` | `core::validation` | Self-validation for AI | Custom types |
| `IRVisitor` | `core::visitor` | IR traversal/analysis | `StructNameCollector`, `FieldCounter`, `IRValidator` |
### IR type system

```text
// core::ir -- the shared representation all parsers emit and all generators consume
IRModule
└─ types: Struct | Enum
   └─ IRStruct
      └─ IRField
         ├─ IRTypeRef
         │  ├─ Primitive  // String, I32, I64, F64, Bool, DateTime, Uuid, etc.
         │  ├─ Option     // Option<T>
         │  ├─ Vec        // Vec<T>
         │  ├─ Named      // Reference to another struct/enum
         │  └─ Map        // HashMap<K, V>
         └─ FieldConstraints
```
## Crate Map
```text
unistructgen/
├── core/                  # unistructgen-core
│   └── src/
│       ├── lib.rs         # Re-exports all public API
│       ├── ir.rs          # IRModule, IRStruct, IRField, IRTypeRef, PrimitiveKind
│       ├── api.rs         # StructGen, EnumGen, ModuleGen, FieldBuilder, FieldType
│       ├── parser.rs      # Parser trait, ParserExt
│       ├── codegen.rs     # CodeGenerator trait, MultiGenerator
│       ├── transformer.rs # IRTransformer trait + 4 built-in transformers
│       ├── pipeline.rs    # Pipeline, PipelineBuilder
│       ├── plugin.rs      # Plugin trait, PluginRegistry
│       ├── visitor.rs     # IRVisitor trait, walk_* functions
│       ├── tools.rs       # AiTool trait, ToolRegistry, ToolCall
│       ├── context.rs     # Context (type-safe dependency injection)
│       ├── validation.rs  # AiValidationError, ValidationReport, map_serde_error
│       ├── diagnostics.rs # CargoDiagnostics, CompilerError
│       ├── patch.rs       # CodeFix, Hunk (LLM code patching)
│       └── error.rs       # Error types
│
├── codegen/               # unistructgen-codegen
│   └── src/
│       ├── lib.rs         # RustRenderer, RenderOptions
│       ├── json_schema.rs # JsonSchemaRenderer (Draft 2020-12)
│       └── builder.rs     # RustRendererBuilder
│
├── parsers/
│   ├── json_parser/       # unistructgen-json-parser
│   ├── openapi_parser/    # unistructgen-openapi-parser
│   ├── markdown_parser/   # unistructgen-markdown-parser (+ SemanticChunker)
│   ├── sql_parser/        # unistructgen-sql-parser
│   ├── graphql_parser/    # unistructgen-graphql-parser
│   └── env_parser/        # unistructgen-env-parser
│
├── proc-macro/            # unistructgen-macro
│   └── src/
│       ├── lib.rs         # 8 macros: generate_struct_from_json!, #[json_struct],
│       │                  # struct_from_external_api!, openapi_to_rust!,
│       │                  # generate_struct_from_sql!, generate_struct_from_graphql!,
│       │                  # generate_struct_from_env!, #[ai_tool]
│       └── ai_tool.rs     # ai_tool macro implementation
│
├── llm/                   # unistructgen-llm
│   └── src/
│       ├── lib.rs         # LlmClient trait, CompletionRequest, Message
│       ├── openai.rs      # OpenAiClient
│       ├── ollama.rs      # OllamaClient
│       └── factory.rs     # LlmClientFactory, Provider enum
│
├── mcp/                   # unistructgen-mcp
│   └── src/
│       ├── lib.rs         # MCP Server exports (serve_stdio, serve_sse)
│       ├── protocol.rs    # JSON-RPC & MCP types
│       ├── server.rs      # Core MCP logic
│       ├── stdio.rs       # Stdio transport
│       └── sse.rs         # SSE transport (optional)
│
├── agent/                 # unistructgen-agent
│   └── src/
│       ├── lib.rs         # Agent & Pipeline exports
│       ├── agent.rs       # ReAct loop implementation
│       └── pipeline.rs    # DAG orchestration
│
├── cli/                   # unistructgen
│   └── src/
│       ├── main.rs        # generate, client, fix commands
│       └── commands/      # Command implementations
│
└── examples/
    ├── tools-agent/       # AI tool registry + batch execution demo
    ├── docu-agent/        # RAG ingestion + JSON Schema + validation loop
    ├── code-agent/        # Compiler-driven AI coding loop
    ├── github-client/     # GitHub API client from OpenAPI
    ├── blog-api/          # Blog API types from OpenAPI
    ├── api-example/       # Struct generation from live API
    └── proc-macro-example/ # All proc macros demonstrated
```
## Type Mapping Reference
How IR types map across parsers and generators:
| IR Type | Rust Output | JSON Schema Output | Source: JSON | Source: SQL | Source: GraphQL |
|---|---|---|---|---|---|
| `String` | `String` | `"string"` | string values | `VARCHAR`, `TEXT` | `String`, `ID` |
| `I32` | `i32` | `"integer"` | small ints | `INT`, `INTEGER` | `Int` |
| `I64` | `i64` | `"integer"` | large ints | `BIGINT`, `SERIAL` | -- |
| `F64` | `f64` | `"number"` | floats | `DOUBLE`, `REAL` | `Float` |
| `Bool` | `bool` | `"boolean"` | booleans | `BOOLEAN` | `Boolean` |
| `DateTime` | `chrono::DateTime<Utc>` | `"string"` + `format: "date-time"` | ISO 8601 strings | `TIMESTAMP` | -- |
| `Uuid` | `uuid::Uuid` | `"string"` + `format: "uuid"` | UUID strings | `UUID` | -- |
| `Decimal` | `rust_decimal::Decimal` | `"number"` | -- | `DECIMAL`, `NUMERIC` | -- |
| `Option(T)` | `Option<T>` | omitted from `required` | -- | nullable columns | nullable fields |
| `Vec(T)` | `Vec<T>` | `"array"` | arrays | -- | `[Type]` |
| `Map(K,V)` | `HashMap<K,V>` | `"object"` + `additionalProperties` | dynamic objects | -- | -- |
| `Named(S)` | `S` | `"$ref": "#/$defs/S"` | nested objects | -- | type references |
## Examples

| Example | What It Demonstrates |
|---|---|
| `tools-agent` | Register functions as AI tools, batch execution, dependency injection via `Context`, `LlmClientFactory` |
| `docu-agent` | RAG ingestion with `SemanticChunker`, JSON Schema contract, AI validation loop with auto-correction |
| `code-agent` | Compiler-driven development: AI writes code, `CargoDiagnostics` checks, errors fed back, AI fixes iteratively |
| `github-client` | GitHub API client scaffold generated from OpenAPI spec |
| `blog-api` | Blog API types from OpenAPI |
| `api-example` | Struct generation from live API responses with `struct_from_external_api!` |
| `proc-macro-example` | All proc macros: JSON, OpenAPI, SQL, GraphQL, .env |
| `killer-example` | Types + LLM tool schema + safe execution in one file |
## Blog

See docs/blog/announcing-unistructgen.md.
## Development

```bash
# Check all workspace crates
cargo check --workspace

# Run all tests
cargo test --workspace

# Run tests for a specific crate
cargo test -p unistructgen-core

# Build release
cargo build --release

# Run CLI in dev
cargo run -p unistructgen -- generate --input data.json --name User
```
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT License (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.