# Laminate
The missing data layer for AI applications in Rust — and everything else that touches messy JSON.
## Why This Exists
Rust has excellent AI/ML inference libraries (candle, burn, ort) but no good way to handle the messy JSON that LLM APIs actually return. Anthropic stringifies tool call arguments. OpenAI streams fragments across dozens of SSE events. Ollama uses a different response shape entirely. Schema changes arrive without warning. And serde — Rust's serialization workhorse — fails hard on the first value that doesn't match your types.
We built laminate to solve this: a unified layer for consuming, normalizing, and dispatching LLM responses across providers. But solving that problem required solving the general problem of messy external data in Rust — and that general solution turned out to be just as valuable for REST APIs, config files, ETL pipelines, healthcare data, and logistics.
The serde maintainer explicitly called for this library in 2017: "I would love to see this explored in a different library specifically geared toward fault-tolerant partially successful deserialization." Nearly a decade later, laminate is that library.
Laminate bonds layers of structure onto raw data — progressively, configurably, without breaking. Like physical lamination, each layer adds strength and rigidity. You can stop at any ply.
## The Problem
LLM APIs return data that breaks serde. But so does everything else from the outside world:
```rust
// serde breaks on the first surprise
// serde_json::Value gives you no safety at all
let val: Value = serde_json::from_str(raw)?;
let port = val.get("port")?.as_u64()? as u16; // no coercion, no path safety
```
There is nothing in between. Until now.
## The Solution
```rust
use laminate::FlexValue;

// Parse once, extract with automatic type coercion
let config = FlexValue::from_json(r#"{"port": "8080", "debug": "true", "workers": 4}"#)?;
let port: u16 = config.extract("port")?;       // "8080" → 8080 ✓
let debug: bool = config.extract("debug")?;    // "true" → true ✓
let workers: i32 = config.extract("workers")?; // 4 → 4 ✓
```
Three lines. No per-field annotations. No custom deserializers. No #[serde(deserialize_with)] on every field. It just works.
## Built for AI: LLM Response Handling
Laminate includes built-in adapters for Anthropic, OpenAI, and Ollama that normalize responses into a single type. For full-featured agent frameworks with dozens of providers, agent loops, and RAG, see Rig, llm, or llm-connector. Laminate's AI adapters are a lightweight convenience layer — the real value is the data shaping engine beneath them.
```rust
use laminate::providers::{AnthropicAdapter, ProviderAdapter};

let adapter = AnthropicAdapter;
let response = adapter.parse_response(&raw_body)?;

// Same API regardless of which LLM provider you're using
let text = response.text();                // all text content
let tool_calls = response.tool_uses();     // all tool/function calls
let tokens = response.usage.output_tokens; // token usage
```
Stream responses with automatic tool call fragment assembly:
```rust
use laminate::streaming::StreamAssembler;

let mut stream = StreamAssembler::new();
for chunk in incoming_sse_bytes {
    if let Some(event) = stream.push(chunk)? {
        // text deltas and fully assembled tool calls arrive as events
    }
}
```
Dispatch tool calls to typed handlers with automatic argument deserialization:
```rust
let mut registry = ToolRegistry::new();
registry.register("get_weather", get_weather);

// One call dispatches all tool uses from any provider's response
let results = registry.dispatch_all(&response).await?;
```
## And Everything Else: Universal Data Shaping
The same engine that handles LLM responses handles every other source of messy data in Rust.
### Navigate Deep Structures
```rust
let api_response = FlexValue::from_json(raw)?;

// Dot-path + bracket-index navigation with coercion at every step
let content: String = api_response.extract("choices[0].message.content")?;
let tool_name: String = api_response.extract("content[0].name")?;
```
### Derive Macro: Typed Structs That Tolerate Messy Data
```rust
use laminate::Laminate;

#[derive(Laminate)]
struct User { name: String, age: u32, active: bool }

let (user, diagnostics) = User::from_json(raw)?;
assert_eq!(user.age, 25);                   // coerced from "25"
assert_eq!(user.active, true);              // coerced from "yes"
assert_eq!(user.overflow["theme"], "dark"); // unknown field preserved!

// Every coercion is recorded — nothing is silent
for d in &diagnostics {
    println!("{d}");
}
```
### Three Modes: Progressive Strictness
| Mode | Unknown Fields | Coercion | Missing Fields | Use For |
|---|---|---|---|---|
| Lenient | Dropped | BestEffort (try everything) | Defaulted | API consumption, scraping, logs |
| Absorbing | Preserved in overflow | SafeWidening (safe conversions) | Error | Round-trip proxying, config editing |
| Strict | Error | Exact (types must match) | Error | Output construction, validation |
```rust
use laminate::CoercionLevel;

// Same data, different strictness
let val = FlexValue::from_json(r#"{"count": "42"}"#)?;

// BestEffort: "42" → 42 ✓
let count: i64 = val.with_coercion(CoercionLevel::BestEffort).extract("count")?;

// Exact: "42" is a string, not an i64 → Error
let result: Result<i64, _> = val.with_coercion(CoercionLevel::Exact).extract("count");
assert!(result.is_err());
```
### Schema Inference & Data Auditing
Infer a schema from data, then audit new data against it:
```rust
use laminate::schema::InferredSchema;

// Learn the schema from 1000 records
let schema = InferredSchema::from_values(&records);
println!("{schema}");
// Fields: name (String, required), age (Integer, 98% present), score (Float, nullable)

// Audit new data against the learned schema
let report = schema.audit(&new_records);
println!("{report}");
// 3 violations: row 42 has age="old" (type mismatch), row 99 missing required 'name', ...
```
See Built for AI above for provider normalization, streaming, and tool call dispatch.
### Locale-Aware Number Parsing
Laminate understands international number formats out of the box:
```rust
let val = FlexValue::from("1.234,56") // European: 1,234.56
    .with_coercion(CoercionLevel::BestEffort);
let amount: f64 = val.extract_root()?; // 1234.56

// Also handles: "1'234.56" (Swiss), "1 234 567" (French/SI),
// "1_000" (Rust/Python), "0xFF" (hex), "$12.99" (currency),
// "2.5 kg" (units), "Mar 31, 2026" (dates)
```
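Laminate's implementation aside, the core disambiguation trick can be sketched in plain std Rust. Everything below is illustrative, not the library's code: strip grouping separators, then decide whether the remaining comma is a decimal point based on which separator appears last.

```rust
// Hypothetical std-only sketch of locale-aware number normalization.
fn parse_locale_number(s: &str) -> Option<f64> {
    // Strip grouping characters: apostrophe (Swiss), underscore (Rust/Python),
    // regular and non-breaking spaces (French/SI).
    let cleaned: String = s
        .chars()
        .filter(|c| !matches!(c, '\'' | '_' | ' ' | '\u{00A0}'))
        .collect();
    let (dots, commas) = (cleaned.matches('.').count(), cleaned.matches(',').count());
    let normalized = if commas == 1 && dots == 0 {
        cleaned.replace(',', ".") // "1234,56" → "1234.56"
    } else if dots == 1 && commas >= 1 {
        // Whichever separator comes last is the decimal point.
        if cleaned.rfind('.') > cleaned.rfind(',') {
            cleaned.replace(',', "") // "1,234.56" → "1234.56"
        } else {
            cleaned.replace('.', "").replace(',', ".") // "1.234,56" → "1234.56"
        }
    } else {
        cleaned.replace(',', "") // plain or multi-comma grouping
    };
    normalized.parse().ok()
}

fn main() {
    assert_eq!(parse_locale_number("1,234.56"), Some(1234.56));
    assert_eq!(parse_locale_number("1.234,56"), Some(1234.56));
    assert_eq!(parse_locale_number("1'234.56"), Some(1234.56));
    assert_eq!(parse_locale_number("1 234 567"), Some(1234567.0));
    assert_eq!(parse_locale_number("1_000"), Some(1000.0));
}
```

The "last separator wins" heuristic is what lets a single parser accept both US and European conventions without a locale flag.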
### SQL Data Sources
Query databases and get FlexValue rows with automatic type mapping:
```rust
use laminate::sources::{DataSource, SqliteSource};

let db = SqliteSource::connect("app.db").await?;
let rows = db.query("SELECT id, name, price FROM products").await?;
for row in &rows {
    let price: f64 = row.extract("price")?; // TEXT column → f64, coerced
}
```
### Type Detection: "What IS This String?"
```rust
use laminate::{guess_type, ValueType};

let guesses = guess_type("3.14");
assert_eq!(guesses[0].0, ValueType::Float); // 0.90 confidence

let guesses = guess_type("true");
assert_eq!(guesses[0].0, ValueType::Boolean); // 0.98 confidence

let guesses = guess_type("1");
// → [(Integer, 0.95), (Float, 0.70), (Boolean, 0.30)]
```
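The shape of confidence-ranked guessing is easy to see in a standalone sketch. The names and weights below are made up for illustration; they are not laminate's actual scoring:

```rust
// Std-only illustration of confidence-ranked type guessing.
fn detect_type(s: &str) -> Vec<(&'static str, f64)> {
    let mut guesses = Vec::new();
    if s.parse::<i64>().is_ok() {
        guesses.push(("Integer", 0.95));
    }
    if s.parse::<f64>().is_ok() {
        // A decimal point is stronger float evidence than bare digits.
        guesses.push(("Float", if s.contains('.') { 0.90 } else { 0.70 }));
    }
    if matches!(s.to_ascii_lowercase().as_str(), "true" | "false" | "yes" | "no" | "1" | "0") {
        // A lone "1" or "0" is only weak boolean evidence.
        guesses.push(("Boolean", if s.len() > 1 { 0.85 } else { 0.30 }));
    }
    // Rank by descending confidence.
    guesses.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    guesses
}

fn main() {
    // "1" is plausibly an integer, a float, or a boolean — in that order.
    assert_eq!(detect_type("1"), vec![("Integer", 0.95), ("Float", 0.70), ("Boolean", 0.30)]);
    assert_eq!(detect_type("3.14")[0].0, "Float");
    assert_eq!(detect_type("true")[0].0, "Boolean");
}
```

Returning every plausible type rather than one winner is what lets a caller apply its own threshold per use case.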
### Source-Aware Coercion
Tell laminate where data came from — it adjusts coercion automatically:
```rust
use laminate::{FlexValue, SourceHint};

// CSV data: everything is strings — enable full coercion + pack detection
let val = FlexValue::from_json(raw)?
    .with_source_hint(SourceHint::Csv);

let price: f64 = val.extract("price")?; // "$12.99" → 12.99 (pack coercion)
let port: u16 = val.extract("port")?;   // "8080" → 8080 (string coercion)
```
### Domain Packs
Six built-in domain packs, always compiled (no feature flags needed):
| Pack | What It Does |
|---|---|
| time | Detects 14+ date/time formats, converts to ISO 8601, batch column detection with US/EU disambiguation, HL7 v2 packed dates, GEDCOM 7.0 qualifiers |
| currency | Parses $12.99, €1.234,56, (¥500), 1'234 CHF — 30 currency codes, accounting negatives, locale-aware decimals |
| units | Parses 2.5 kg, 120 lbs 4 oz, 37.2°C — weight, length, temperature (°C↔°F↔K conversion), volume, time, data, nautical miles, UNECE/X12/DOD standard codes, pack-size notation, SI-prefixed units, weight qualifiers (gross/net/tare) |
| identifiers | Validates IBAN (MOD-97), credit cards (Luhn + BIN brand), ISBN-10/13, US SSN/EIN, US NPI, UK NHS Number, EU VAT, UUID, email, phone |
| geo | Parses decimal degrees, DMS, ISO 6709 coordinates, detects lat/lng vs lng/lat order, identifies geodetic datums (WGS84, JGD2011, CGCS2000) |
| medical | Converts 18 lab values between US (mg/dL) and SI (mmol/L) units with analyte-specific factors, normalizes pharmaceutical notation (mcg/µg/ug), parses HL7 v2 dates |
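To make the currency pack's job concrete, here is a deliberately tiny std-only sketch of the kind of normalization it performs. The function name and its narrow scope (no locale decimal commas, no currency-code table) are illustrative only:

```rust
// Std-only sketch: strip symbols/codes, honor accounting negatives.
fn parse_currency(s: &str) -> Option<f64> {
    let t = s.trim();
    // Accounting notation: parentheses mean negative, e.g. "(¥500)".
    let (t, negative) = if t.starts_with('(') && t.ends_with(')') {
        (&t[1..t.len() - 1], true)
    } else {
        (t, false)
    };
    // Keep digits and the decimal point; drop symbols, codes, apostrophes.
    // (Locale decimal commas are out of scope for this sketch.)
    let cleaned: String = t.chars().filter(|c| c.is_ascii_digit() || *c == '.').collect();
    let value: f64 = cleaned.parse().ok()?;
    Some(if negative { -value } else { value })
}

fn main() {
    assert_eq!(parse_currency("$12.99"), Some(12.99));
    assert_eq!(parse_currency("(¥500)"), Some(-500.0));
    assert_eq!(parse_currency("1'234 CHF"), Some(1234.0));
}
```

The real pack layers locale-aware decimals and a 30-code currency table on top of this same skeleton.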
## Features
```toml
[dependencies]
laminate = "0.1"                                      # Core: FlexValue, coercion, modes
laminate = { version = "0.1", features = ["derive"] } # + #[derive(Laminate)]
laminate = { version = "0.1", features = ["full"] }   # Everything

# Optional: database sources
laminate = { version = "0.1", features = ["sqlite"] }
```
| Feature | What It Adds |
|---|---|
| core (default) | FlexValue, path navigation, coercion engine, modes, diagnostics, 6 domain packs, type detection, source hints |
| derive | #[derive(Laminate)] + #[derive(ToolDefinition)] procedural macros |
| streaming | SSE parser, stream event handling, MessageSnapshot |
| providers | Anthropic, OpenAI, Ollama response normalization |
| registry | Typed handler dispatch for tool calls |
| schema | Schema inference and data auditing |
| full | All of the above |
| chrono-integration | Convert detected dates to chrono::NaiveDate / NaiveDateTime |
| uom-integration | Convert parsed units to uom type-safe SI quantities |
## Why Not Just Use serde?
Laminate is a complement to serde, not a replacement. serde handles serialization brilliantly. Laminate handles the messy reality that comes before your carefully typed structs:
| Scenario | serde | laminate |
|---|---|---|
| API returns "42" for an integer field | ❌ Error | ✅ Coerces to 42 with diagnostic |
| Unknown fields in response | ❌ Ignored or error | ✅ Preserved in overflow for round-trip |
| Missing optional field | ⚠️ Requires #[serde(default)] per field | ✅ Mode-level policy |
| Schema changed upstream | ❌ Hard failure | ✅ Absorb unknown, default missing, report all |
| Multiple error locations | ❌ Stops at first | ✅ Collects all diagnostics |
| Mixed types in array | ❌ Error | ✅ Element-level coercion |
The serde maintainer explicitly stated that fault-tolerant, partially-successful deserialization should be "a different library." This is that library.
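"Collects all diagnostics" is the row worth dwelling on. A fail-fast parser returns at the first bad field; a fault-tolerant one walks the whole record and reports everything. Here is a std-only sketch of that difference; the `Diagnostic` type and `coerce_record` function are hypothetical, not laminate's API:

```rust
// Std-only sketch of collect-all-diagnostics versus fail-fast.
struct Diagnostic {
    path: String,
    message: String,
}

// Try to coerce every field to i64, recording a diagnostic per problem
// instead of returning at the first error.
fn coerce_record(fields: &[(&str, &str)]) -> (Vec<(String, i64)>, Vec<Diagnostic>) {
    let mut values = Vec::new();
    let mut diagnostics = Vec::new();
    for (path, raw) in fields {
        match raw.trim().parse::<i64>() {
            Ok(n) => values.push((path.to_string(), n)),
            Err(_) => diagnostics.push(Diagnostic {
                path: path.to_string(),
                message: format!("cannot coerce {raw:?} to i64"),
            }),
        }
    }
    (values, diagnostics)
}

fn main() {
    let (values, diagnostics) =
        coerce_record(&[("age", "42"), ("port", "oops"), ("count", "7"), ("id", "x")]);
    // Both bad fields are reported, not just the first one hit.
    assert_eq!(values.len(), 2);
    assert_eq!(diagnostics.len(), 2);
    assert_eq!(diagnostics[0].path, "port");
    for d in &diagnostics {
        println!("{}: {}", d.path, d.message);
    }
}
```

A fail-fast equivalent would have stopped at `port` and never told you `id` was broken too.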
## Design Philosophy
Every transformation is auditable — laminate never silently changes your data. The diagnostic trail tells you exactly what was coerced, what was defaulted, what was dropped, and what was preserved.
```text
coerced string → i64 at 'age' [Info]
defaulted field 'verified' (null → default) [Warning]
preserved unknown field 'theme' in overflow [Info]
overridden object → number at 'config' [Warning: nested data lost]
```
Progressive strictness means you start lenient and tighten over time. Ship fast with BestEffort, then review diagnostics, then progressively restrict. The same pipeline works for prototyping and production.
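The tighten-over-time workflow can be sketched in a few lines of std Rust. The enum mirrors the CoercionLevel names above, but the logic is illustrative, not laminate's implementation:

```rust
// Std-only sketch of mode-level coercion policy.
#[derive(Clone, Copy)]
enum CoercionLevel {
    BestEffort,   // try everything
    SafeWidening, // lossless conversions only
    Exact,        // types must already match
}

// Coerce a raw string field to i64 under a given policy.
fn coerce_i64(raw: &str, level: CoercionLevel) -> Result<i64, String> {
    match level {
        // BestEffort: string → integer parsing is attempted.
        CoercionLevel::BestEffort => raw
            .trim()
            .parse()
            .map_err(|_| format!("cannot coerce {raw:?} to i64")),
        // SafeWidening allows only lossless numeric widening, and Exact
        // requires a real integer: a string input fails under both.
        _ => Err(format!("expected i64, got string {raw:?}")),
    }
}

fn main() {
    // Same raw data, three outcomes as you ratchet strictness up.
    assert_eq!(coerce_i64(" 42 ", CoercionLevel::BestEffort), Ok(42));
    assert!(coerce_i64("42", CoercionLevel::SafeWidening).is_err());
    assert!(coerce_i64("42", CoercionLevel::Exact).is_err());
}
```

Because the policy lives at the call site rather than on each field, flipping a pipeline from prototype to production is a one-line change.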
## License

[License details]

## Contributing

[Contributing guidelines]