AutoModel — SQL-first Reverse ORM for Rust, Built for Better DX and the AI Era
Why AutoModel
Database access in Rust typically falls into two camps: ORMs (Diesel, SeaORM) and compile-time checked SQL (sqlx). Both have trade-offs that become sharply worse when an AI assistant — or any automated tool — is working with your code, and both expose humans to far more intense code-review cycles.
AutoModel is different: you write plain SQL, and the tool generates real Rust source files.
queries/users/get_user.sql → src/generated/users.rs (checked into git)
- Human or AI can read everything. Generated structs, function signatures, error enums, and type aliases — all corresponding to the actual database schema, including constraints exposed as structured Rust enums — are ordinary `.rs` files sitting in your repo. An LLM can inspect them, reason about types, and produce correct calling code on the first try — no PostgreSQL agent, no database connection, no special tooling required.
- Plain SQL stays plain SQL. Your queries are `.sql` files with full syntax highlighting. There is no query builder to learn, no expression DSL. Any valid PostgreSQL query works — window functions, CTEs, recursive queries, lateral joins, subqueries, aggregations, `UNNEST` batch inserts, partitioned tables, domain types, composite types, conditional clauses — all features of SQL, with no restrictions.
- Build-time code generation, not compile-time magic. `build.rs` connects to the database once, extracts types from prepared statements, and writes `.rs` files — the whole step takes seconds, not the minutes of a full application compile. After that, builds are fully offline. CI can verify that generated code is up to date without a live database.
- Diff-friendly and reviewable. Because the generated code is committed, pull request reviewers (human or AI) see exactly what changed — a renamed field, a new column, a constraint added. Nothing is hidden inside macro expansion.
- Built-in query analytics. During code generation, AutoModel runs `EXPLAIN` on every query. Every generated function includes the query plan in its doc comments, and a warnings file committed to the repo flags sequential scans (missing indexes) and multi-partition access on partitioned tables. Warnings surface at build time and are visible at review time — reviewers (human or AI) catch performance problems before they reach production. Analysis can be opted out of per query.
- Feature-rich control over generated code. Struct reuse and deduplication across queries. Diff-based conditional updates for the load → transform → save pattern. Custom struct naming for cleaner, domain-specific APIs. Automatic `multiunzip` support combined with `UNNEST` for batch inserts. Strongly typed mappings for `json`/`jsonb` columns. Full support for composite types and whole-record column insertion and selection — and much more.
- Less code to write, review, and test. The glue between SQL and Rust — structs, parameter binding, error enums, type conversions — is an entire class of code that no human needs to write, review, or maintain. It is machine-generated from your SQL and the database schema. Reviewers focus on the `.sql` file and the business logic that calls it. This translates directly into faster development cycles: adding a new query is a single `.sql` file, and on the next build you have a strongly typed Rust function to call.
The result: a workflow where SQL is the source of truth, types are real files, every tool in the ecosystem — IDE, AI, CI, code review — can see the full picture, and development moves faster because an entire layer of boilerplate is eliminated.
Project Structure
This is a Cargo workspace with three main components:
- `automodel-lib/` - The core library for generating typed functions from SQL queries
- `automodel-cli/` - Command-line interface with advanced features
- `example-app/` - An example application that demonstrates build-time code generation
Quick Start
1. Add to your Cargo.toml
```toml
[build-dependencies]
automodel = "0.9"

[dependencies]
automodel = "0.9"
tokio = { version = "1.0", features = ["rt"] }
```
2. Create a build.rs
```rust
// build.rs — sketch; adjust arguments and options to the actual automodel API
fn main() {
    tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            automodel::AutoModel::generate(/* database URL, queries dir, output path, defaults */)
                .await
                .expect("AutoModel code generation failed");
        });
}
```
3. Write SQL queries
Create a queries/ directory and add .sql files organized by module:
my-project/
├── queries/
│ └── users/
│ ├── get_user_by_id.sql
│ ├── create_user.sql
│ └── update_user_profile.sql
├── build.rs
└── src/
└── main.rs
Each SQL file contains an optional metadata block followed by the query:
-- @automodel
-- description: Retrieve a user by their ID
-- expect: exactly_one
-- @end
SELECT id, name, email, created_at
FROM users
WHERE id = #{id}
A more advanced example with conditional updates and custom types:
-- @automodel
-- description: Update user profile with conditional name/email
-- expect: exactly_one
-- conditions_type: true
-- types:
-- profile: "crate::models::UserProfile"
-- @end
UPDATE users
SET profile = #{profile}, updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
WHERE id = #{user_id}
RETURNING id, name, email, profile, updated_at
File path determines the generated module and function name: queries/{module}/{function}.sql. Both must be valid Rust identifiers.
All metadata is optional. When omitted, sensible defaults are used. See Configuration Options for the full reference.
4. Use the generated functions
```rust
// Sketch — the client type depends on your backend (e.g. tokio_postgres::Client)
use tokio_postgres::Client;

async fn example(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // Call the generated function; its signature mirrors the SQL parameters
    let user = crate::generated::users::get_user_by_id(client, 42).await?;
    println!("{} <{}>", user.name, user.email);
    Ok(())
}
```
5. CLI Usage (alternative to build.rs)
AutoModel also ships as a standalone CLI for use outside of build.rs:
```shell
# Generate code from queries directory
# Generate with custom output file
# Dry run (see generated code without writing files)
# Help
```
Configuration Options
AutoModel uses SQL files with embedded metadata to define queries and their configuration. Here's a comprehensive guide to all configuration options:
SQL File Structure
Each .sql file in the queries/{module}/ directory contains:
- Optional metadata block (in YAML format within SQL comments)
- The SQL query
-- @automodel
-- description: Query description
-- expect: exactly_one
-- # ... other configuration options
-- @end
SELECT * FROM users WHERE id = #{id}
Default Configuration
Defaults are configured in build.rs when calling AutoModel::generate():
```rust
// Sketch — field names are illustrative; see the crate docs for the exact API
let defaults = DefaultsConfig {
    telemetry_level: TelemetryLevel::Info,
    ensure_indexes: true,
    ..Default::default()
};
```
Telemetry Levels:
- `none` - No instrumentation
- `info` - Basic span creation with function name
- `debug` - Include SQL query in span (if `include_sql` is true)
- `trace` - Include both SQL query and parameters in span
Query Analysis Features:
- Sequential scan detection: Automatically detects queries that perform full table scans
- Warnings during build: Identifies queries that might benefit from indexing
Query Configuration
Each query is defined in its own .sql file: queries/{module}/{query_name}.sql
The metadata block supports these options:
Minimal Example
-- @automodel
-- @end
SELECT id, name FROM users WHERE id = #{id}
If no metadata is provided, sensible defaults are used.
All Available Options
-- @automodel
-- description: Retrieve a user by their ID # Function documentation
-- module: custom_module # Override directory-based module name
-- expect: exactly_one # exactly_one | possible_one | at_least_one | multiple
-- types: # Custom type mappings
-- profile: "crate::models::UserProfile" # query params/output by name
-- public.positive_int: "std::num::NonZeroI32" # domain type alias override
-- public.users.social_links: "Vec<crate::models::UserSocialLink>" # composite type field
-- telemetry: # Per-query telemetry settings
-- level: trace
-- include_params: [id, name]
-- include_sql: false
-- ensure_indexes: true # Enable performance analysis
-- multiunzip: false # Enable for UNNEST-based batch inserts
-- conditions_type: false # Use old/new struct for conditional queries
-- parameters_type: false # Group all parameters into one struct
-- return_type: "UserInfo" # Custom return type name
-- error_type: "UserError" # Custom error type name
-- conditions_type_derives: # Additional derives for conditions struct
-- - serde::Serialize
-- parameters_type_derives: # Additional derives for parameters struct
-- - serde::Deserialize
-- return_type_derives: # Additional derives for return struct
-- - serde::Serialize
-- - PartialEq
-- error_type_derives: # Additional derives for error enum
-- - serde::Serialize
-- @end
SELECT id, name FROM users WHERE id = #{id}
Expected Result Types
Controls how the query is executed and what it returns:
-- @automodel
-- expect: exactly_one # fetch_one() -> Result<T, Error> - Fails if 0 or >1 rows
-- @end
-- @automodel
-- expect: possible_one # fetch_optional() -> Result<Option<T>, Error> - 0 or 1 row
-- @end
-- @automodel
-- expect: at_least_one # fetch_all() -> Result<Vec<T>, Error> - Fails if 0 rows
-- @end
-- @automodel
-- expect: multiple # fetch_all() -> Result<Vec<T>, Error> - 0 or more rows (default for collections)
-- @end
Custom Type Mappings
Override PostgreSQL-to-Rust type mappings for specific fields:
-- @automodel
-- types:
-- profile: "crate::models::UserProfile" # For input parameters and output fields with this name
-- users.profile: "crate::models::UserProfile" # For output fields from specific table (when using JOINs)
-- posts.metadata: "crate::models::PostMetadata"
-- status: "UserStatus" # Custom enum types
-- category: "crate::enums::Category"
-- @end
SELECT id, name, profile FROM users WHERE id = #{id}
JSON Wrapper Control:
By default, custom types use JSON serialization. Control this with suffixes:
-- @automodel
-- types:
-- profile: "UserProfile@json" # Force JSON wrapper (default)
-- uuid: "MyUuid@native" # No wrapper - type implements sqlx traits
-- data: "Vec<Option<i32>>@native" # Native binding for complex types
-- @end
- `@native`: Type implements `sqlx::Encode`/`Decode` (or `tokio_postgres::ToSql`/`FromSql`)
- `@json` or no suffix: Uses JSON serialization (requires `serde::Serialize`/`Deserialize`)
Composite Type Field Mappings:
Use 3-segment keys (schema.type.field) to map fields inside PostgreSQL composite types. This changes the generated struct field type from serde_json::Value to your custom Rust type, wrapped in sqlx::types::Json<T>:
-- @automodel
-- types:
-- public.user_with_links_input.social_links: "Vec<crate::models::UserSocialLink>"
-- @end
INSERT INTO public.users (name, email, social_links)
SELECT r.name, r.email, r.social_links
FROM UNNEST(#{items}::public.user_with_links_input[]) AS r(name, email, social_links)
RETURNING id, name, email, social_links
This generates the composite type struct with a typed field instead of `serde_json::Value`.
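For illustration, the generated struct for `public.user_with_links_input` might look like the sketch below. The `UserSocialLink` model and the plain `Json` wrapper are stand-ins so the sketch is self-contained (with sqlx the field would be `sqlx::types::Json<T>`), and field nullability is illustrative:

```rust
// Stand-in for the application's model type (hypothetical)
#[derive(Debug, PartialEq)]
pub struct UserSocialLink {
    pub url: String,
}

// Stand-in for sqlx::types::Json<T>, so the sketch compiles on its own
#[derive(Debug, PartialEq)]
pub struct Json<T>(pub T);

// Sketch of the generated composite-type struct: the mapped field is
// strongly typed instead of serde_json::Value
#[derive(Debug, PartialEq)]
pub struct UserWithLinksInput {
    pub name: String,
    pub email: String,
    pub social_links: Option<Json<Vec<UserSocialLink>>>,
}

pub fn example_record() -> UserWithLinksInput {
    UserWithLinksInput {
        name: "Alice".to_string(),
        email: "alice@example.com".to_string(),
        social_links: Some(Json(vec![UserSocialLink {
            url: "https://example.com/alice".to_string(),
        }])),
    }
}

fn main() {
    let record = example_record();
    assert!(record.social_links.is_some());
    println!("{:?}", record);
}
```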
Key details:
- `jsonb` fields → wrapped as `Json<T>` (e.g., `Option<Json<Vec<UserSocialLink>>>`)
- `jsonb[]` fields → per-element wrapping as `Vec<Json<T>>` (e.g., `Vec<Option<Json<UserTag>>>`)
- Works for both standalone composite types (`CREATE TYPE`) and table-backed types
- The `@json`/`@native` suffixes apply here too
- Mappings are global: if two queries reference the same composite type field, both must specify the same target type (conflicting mappings produce a build error)
- Multiple queries can contribute mappings for different fields of the same composite type
-- Both queries map the same composite type field — types must agree
-- Query A:
-- types:
-- public.users.social_links: "Vec<crate::models::UserSocialLink>"
-- Query B:
-- types:
-- public.users.social_links: "Vec<crate::models::UserSocialLink>" # OK: same type
-- public.users.profile: "crate::models::UserProfile" # OK: different field
Domain Type Alias Mappings:
PostgreSQL domain types (CREATE DOMAIN) are detected automatically and generated as Rust type aliases:
```sql
CREATE DOMAIN positive_int AS INTEGER CHECK (VALUE > 0);
CREATE DOMAIN email_address AS VARCHAR(255) CHECK (VALUE ~* '^[^@]+@[^@]+$');
```
Generated (default):
pub type PositiveInt = i32;
pub type EmailAddress = String;
Use 2-segment keys (schema.domain_name) in types: to override the alias target:
-- @automodel
-- types:
-- public.positive_int: "std::num::NonZeroI32"
-- @end
Generated (with override):
pub type PositiveInt = NonZeroI32;
Domain CHECK constraints are also included in error type enums for mutation queries (e.g., PositiveIntCheck, EmailAddressCheck).
Type mapping key summary:
| Key format | Segments | Purpose | Example |
|---|---|---|---|
| `field_name` | 1 | Map parameter/column by name | `profile: "UserProfile"` |
| `schema.domain` | 2 | Override domain type alias | `public.positive_int: "NonZeroI32"` |
| `schema.type.field` | 3 | Map composite type field | `public.users.social_links: "Vec<Link>"` |
Named Parameters
Use #{parameter_name} syntax in SQL queries:
SELECT * FROM users WHERE id = #{user_id} AND status = #{status}
Optional Parameters:
Add ? suffix for optional parameters that become Option<T>:
SELECT * FROM posts
WHERE user_id = #{user_id}
AND (#{category?} IS NULL OR category = #{category?})
Optional + Nullable Parameters (??):
Use ?? suffix in conditional blocks when a parameter is both optional (controls block inclusion) and nullable (can set the column to NULL). Generates Option<Option<T>>:
UPDATE users
SET updated_at = NOW()
#[, age = #{age??}]
WHERE id = #{user_id}
RETURNING *
- `None` → skip the conditional block entirely (no change)
- `Some(None)` → include the block, set the value to NULL
- `Some(Some(35))` → include the block, set the value to 35
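The three states can be sketched in plain Rust. This is a hypothetical helper for intuition only — the real generated code binds `$n` placeholders rather than interpolating values:

```rust
// Sketch: how an Option<Option<i32>> for the `age??` placeholder above
// maps to the three behaviors (illustrative, not generated code)
pub fn render_age_clause(age: Option<Option<i32>>) -> String {
    match age {
        // None: skip the conditional block entirely
        None => String::new(),
        // Some(None): include the block, set the column to NULL
        Some(None) => ", age = NULL".to_string(),
        // Some(Some(v)): include the block, set the value
        Some(Some(v)) => format!(", age = {}", v),
    }
}

fn main() {
    assert_eq!(render_age_clause(None), "");
    assert_eq!(render_age_clause(Some(None)), ", age = NULL");
    assert_eq!(render_age_clause(Some(Some(35))), ", age = 35");
}
```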
Array Parameters with Nullable Elements ([?]):
Use [?] suffix for array parameters where individual elements can be NULL, resulting in Vec<Option<T>>:
INSERT INTO users (name, email, age)
SELECT * FROM UNNEST(
#{names}::text[],
#{emails}::text[],
#{ages[?]}::int4[] -- Vec<Option<i32>>: array where elements can be NULL
)
Parameter Suffix Reference:
| Suffix | Generated Type | Use Case |
|---|---|---|
| (none) | `T` | Required parameter |
| `?` | `Option<T>` | Optional / conditional block parameter |
| `??` | `Option<Option<T>>` | Conditional block + nullable (skip / set NULL / set value) |
| `[?]` | `Vec<Option<T>>` | Array with nullable elements |
| `?[?]` | `Option<Vec<Option<T>>>` | Optional array with nullable elements |
| `??[?]` | `Option<Option<Vec<Option<T>>>>` | Conditional + nullable array with nullable elements |
Suffixes are orthogonal and compose: ? controls optionality, second ? adds value nullability, [?] adds element nullability.
Note: Top-level `Option<>` in type mappings is banned; use suffix annotations instead. If a custom type mapping like `Vec<Option<T>>` already has nullable elements, the `[?]` suffix is a no-op (no double-wrapping).
Per-Query Telemetry Configuration
Override global telemetry settings for specific queries in the metadata block:
-- @automodel
-- telemetry:
-- level: trace # none | info | debug | trace
-- include_params: [user_id, email] # Only these parameters logged
-- include_sql: true # Include SQL in spans
-- @end
SELECT * FROM users WHERE id = #{user_id}
Per-Query Analysis Configuration
Override global analysis settings for specific queries:
-- @automodel
-- ensure_indexes: true # Enable/disable analysis for this query
-- @end
SELECT * FROM users WHERE email = #{email}
Module Organization
Generated functions are organized into modules based on directory structure:
queries/
├── users/ # Generated as src/generated/users.rs
│ ├── get_user.sql
│ └── create_user.sql
├── posts/ # Generated as src/generated/posts.rs
│ └── get_post.sql
└── admin/ # Generated as src/generated/admin.rs
└── health_check.sql
You can override the module name in the metadata:
-- @automodel
-- module: custom_module # Override directory-based module name
-- @end
Complete Examples
Simple query with custom type:
queries/users/get_user_profile.sql:
-- @automodel
-- description: Get user profile with custom JSON type
-- expect: possible_one
-- types:
-- profile: "crate::models::UserProfile"
-- telemetry:
-- level: trace
-- include_params: [user_id]
-- include_sql: true
-- ensure_indexes: true
-- @end
SELECT id, name, profile
FROM users
WHERE id = #{user_id}
Query with optional parameter:
queries/posts/search_posts.sql:
-- @automodel
-- description: Search posts with optional category filter
-- expect: multiple
-- types:
-- category: "PostCategory"
-- metadata: "crate::models::PostMetadata"
-- ensure_indexes: true
-- @end
SELECT * FROM posts
WHERE user_id = #{user_id}
AND (#{category?} IS NULL OR category = #{category?})
DDL query without analysis:
queries/setup/create_sessions_table.sql:
-- @automodel
-- description: Create sessions table
-- ensure_indexes: false
-- @end
CREATE TABLE sessions (
  id UUID PRIMARY KEY,
  created_at TIMESTAMPTZ DEFAULT NOW()
)
Bulk operation with minimal telemetry:
queries/admin/cleanup_old_sessions.sql:
-- @automodel
-- description: Remove sessions older than cutoff date
-- expect: exactly_one
-- telemetry:
-- include_params: [] # Skip all parameters for privacy
-- include_sql: false
-- @end
DELETE FROM sessions
WHERE created_at < #{cutoff_date}
Conditional Queries
AutoModel supports conditional queries that dynamically include or exclude SQL clauses based on parameter availability. This allows you to write flexible queries that adapt based on which optional parameters are provided.
Conditional Syntax
Use the #[...] syntax to wrap optional SQL parts:
queries/users/search_users.sql:
-- @automodel
-- description: Search users with optional name and age filters
-- @end
SELECT id, name, email
FROM users
WHERE 1=1
#[AND name ILIKE #{name_pattern?}]
#[AND age >= #{min_age?}]
ORDER BY created_at DESC
Key Components:
- `#[AND name ILIKE #{name_pattern?}]` - Conditional block that includes the clause only if `name_pattern` is `Some`
- `#{name_pattern?}` - Optional parameter (note the `?` suffix)
- The conditional block is removed entirely if the parameter is `None`
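The assembly logic can be sketched in a few lines of plain Rust (this is not AutoModel's actual implementation): conditional blocks are kept only when their parameter is `Some`, and `$n` placeholders are numbered in the order of the parameters that remain:

```rust
// Illustrative sketch of conditional-block assembly with $n renumbering
pub fn build_search_sql(name_pattern: &Option<String>, min_age: &Option<i32>) -> (String, usize) {
    let mut sql = String::from("SELECT id, name, email FROM users WHERE 1=1");
    let mut n = 0;
    if name_pattern.is_some() {
        n += 1;
        sql.push_str(&format!(" AND name ILIKE ${}", n));
    }
    if min_age.is_some() {
        n += 1;
        sql.push_str(&format!(" AND age >= ${}", n));
    }
    sql.push_str(" ORDER BY created_at DESC");
    (sql, n) // SQL text plus the number of bound parameters
}

fn main() {
    let (sql, n) = build_search_sql(&None, &Some(25));
    assert_eq!(
        sql,
        "SELECT id, name, email FROM users WHERE 1=1 AND age >= $1 ORDER BY created_at DESC"
    );
    assert_eq!(n, 1);
}
```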
Runtime SQL Examples
The same function generates different SQL based on parameter availability:
```rust
// Argument order is illustrative: (client, name_pattern?, min_age?)

// Both parameters provided
search_users(&client, Some("%john%".to_string()), Some(25)).await?;
// SQL: "SELECT id, name, email FROM users WHERE 1=1 AND name ILIKE $1 AND age >= $2 ORDER BY created_at DESC"
// Params: ["%john%", 25]

// Only name pattern provided
search_users(&client, Some("%john%".to_string()), None).await?;
// SQL: "SELECT id, name, email FROM users WHERE 1=1 AND name ILIKE $1 ORDER BY created_at DESC"
// Params: ["%john%"]

// Only age provided
search_users(&client, None, Some(25)).await?;
// SQL: "SELECT id, name, email FROM users WHERE 1=1 AND age >= $1 ORDER BY created_at DESC"
// Params: [25]

// No optional parameters
search_users(&client, None, None).await?;
// SQL: "SELECT id, name, email FROM users WHERE 1=1 ORDER BY created_at DESC"
// Params: []
```
Complex Conditional Queries
You can mix conditional and non-conditional parameters:
queries/users/find_users_complex.sql:
-- @automodel
-- description: Complex search with required name pattern and optional filters
-- @end
SELECT id, name, email, age
FROM users
WHERE name ILIKE #{name_pattern}
#[AND age >= #{min_age?}]
AND email IS NOT NULL
#[AND created_at >= #{since?}]
ORDER BY name
This generates a function with signature:
```rust
// Sketch — exact client, date, and error types depend on configuration
pub async fn find_users_complex(
    client: &Client,
    name_pattern: String,          // required
    min_age: Option<i32>,          // optional: #[AND age >= #{min_age?}]
    since: Option<DateTime<Utc>>,  // optional: #[AND created_at >= #{since?}]
) -> Result<Vec<FindUsersComplexItem>, Error>
```
Best Practices
- Use `WHERE 1=1` as a base condition when all WHERE clauses are conditional:

```sql
SELECT * FROM users
WHERE 1=1
#[AND name = #{name?}]
#[AND age > #{min_age?}]
```
Conditional UPDATE Statements
Conditional syntax is also useful for UPDATE statements where you want to update only certain fields based on which parameters are provided:
queries/users/update_user_fields.sql:
-- @automodel
-- description: Update user fields conditionally - only updates fields that are provided (not None)
-- expect: exactly_one
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
#[, age = #{age?}]
WHERE id = #{user_id}
RETURNING id, name, email, age, updated_at
This generates a function that allows partial updates:
```rust
// Argument order is illustrative: (client, name?, email?, age?, user_id)

// Update only the name
update_user_fields(&client, Some("Alice".to_string()), None, None, user_id).await?;
// SQL: "UPDATE users SET updated_at = NOW(), name = $1 WHERE id = $2 RETURNING ..."

// Update only the age
update_user_fields(&client, None, None, Some(30), user_id).await?;
// SQL: "UPDATE users SET updated_at = NOW(), age = $1 WHERE id = $2 RETURNING ..."

// Update multiple fields
update_user_fields(&client, Some("Alice".to_string()), Some("alice@example.com".to_string()), None, user_id).await?;
// SQL: "UPDATE users SET updated_at = NOW(), name = $1, email = $2 WHERE id = $3 RETURNING ..."

// Update all fields
update_user_fields(&client, Some("Alice".to_string()), Some("alice@example.com".to_string()), Some(30), user_id).await?;
// SQL: "UPDATE users SET updated_at = NOW(), name = $1, email = $2, age = $3 WHERE id = $4 RETURNING ..."
```
Note: Always include at least one non-conditional SET clause (like updated_at = NOW()) to ensure the UPDATE statement is syntactically valid even when all optional parameters are None.
Struct Configuration and Reuse
AutoModel provides four powerful configuration options that allow you to customize how structs and error types are generated and reused across queries: parameters_type, conditions_type, return_type, and error_type. These options enable you to eliminate code duplication, improve type safety, and create cleaner APIs.
Overview
| Option | Purpose | Default | Accepts | Generates |
|---|---|---|---|---|
| `parameters_type` | Group query parameters into a struct | `false` | `true` or struct name | `{QueryName}Params` struct |
| `conditions_type` | Diff-based conditional parameters | `false` | `true` or struct name | `{QueryName}Params` struct with old/new comparison |
| `return_type` | Custom name for return type struct | auto | struct name or omit | Custom named or `{QueryName}Item` struct |
| `error_type` | Custom name for error constraint enum (mutations only) | auto | error type name or omit | Custom named or `{QueryName}Constraints` enum |
Any structure or error type generated can be referenced by other queries. AutoModel validates at build time that the types are compatible and constraints match exactly.
parameters_type: Structured Parameters
Group all query parameters into a single struct instead of passing them individually. Makes function calls cleaner and enables parameter reuse.
Basic Usage:
queries/users/insert_user_structured.sql:
-- @automodel
-- parameters_type: true # Generates InsertUserStructuredParams
-- @end
INSERT INTO users (name, email, age)
VALUES (#{name}, #{email}, #{age})
RETURNING id
Generated Code:
```rust
// Sketch — client and error types depend on configuration
pub async fn insert_user_structured(
    client: &Client,
    params: InsertUserStructuredParams,
) -> Result<i32, Error>
```
Usage:
```rust
let params = InsertUserStructuredParams {
    name: "Alice".to_string(),
    email: "alice@example.com".to_string(),
    age: 30,
};
let id = insert_user_structured(&client, params).await?;
```
Struct Reuse:
Specify an existing struct name to reuse it across queries:
queries/users/get_user_by_id_and_email.sql:
-- @automodel
-- parameters_type: true # Generates GetUserByIdAndEmailParams
-- @end
SELECT id, name, email FROM users WHERE id = #{id} AND email = #{email}
queries/users/delete_user_by_id_and_email.sql:
-- @automodel
-- parameters_type: "GetUserByIdAndEmailParams" # Reuses existing struct
-- @end
DELETE FROM users WHERE id = #{id} AND email = #{email} RETURNING id
Only one struct definition is generated, shared by both functions.
conditions_type: Diff-Based Conditional Parameters
For queries with conditional SQL (#[...] blocks), generate a struct and compare old vs new values to decide which clauses to include. Works with any query type (SELECT, UPDATE, DELETE, etc.).
Basic Usage:
queries/users/update_user_fields_diff.sql:
-- @automodel
-- conditions_type: true # Generates UpdateUserFieldsDiffParams
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
WHERE id = #{user_id}
Generated Code:
```rust
// Sketch — conditional parameters move into the struct; the rest stay individual.
// The return type depends on `expect` and the presence of RETURNING.
pub async fn update_user_fields_diff(
    client: &Client,
    old: &UpdateUserFieldsDiffParams,
    new: &UpdateUserFieldsDiffParams,
    user_id: i32,
) -> Result<(), Error>
```
Usage:
```rust
let old = UpdateUserFieldsDiffParams {
    name: Some("Alice".to_string()),
    email: Some("alice@example.com".to_string()),
};
let new = UpdateUserFieldsDiffParams {
    name: Some("Alicia".to_string()),
    email: Some("alice@example.com".to_string()),
};
update_user_fields_diff(&client, &old, &new, user_id).await?;
// Only executes: UPDATE users SET updated_at = NOW(), name = $1 WHERE id = $2
```
How It Works:
- The struct contains only conditional parameters (those ending with `?` or `??`)
- Non-conditional parameters remain as individual function parameters
- At runtime, the function compares `old.field != new.field`
- Only clauses where the field differs are included in the query
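The diff rule can be sketched in a few lines of plain Rust. The struct and the `$n` placeholder are hypothetical stand-ins, not the generated code:

```rust
// Hypothetical conditions struct mirroring the ?-parameters above
#[derive(Clone, PartialEq)]
pub struct UpdateParams {
    pub name: Option<String>,
    pub email: Option<String>,
}

// Sketch: a clause is included only when the field changed between old and new
pub fn changed_clauses(old: &UpdateParams, new: &UpdateParams) -> Vec<&'static str> {
    let mut clauses = Vec::new();
    if old.name != new.name {
        clauses.push(", name = $n");
    }
    if old.email != new.email {
        clauses.push(", email = $n");
    }
    clauses
}

fn main() {
    let old = UpdateParams {
        name: Some("Alice".to_string()),
        email: Some("alice@example.com".to_string()),
    };
    let mut new = old.clone();
    new.name = Some("Alicia".to_string());
    // Only the changed field produces a clause
    assert_eq!(changed_clauses(&old, &new), vec![", name = $n"]);
}
```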
Nullable Fields with ??:
Use ?? in conditional blocks when a field is nullable (e.g., age column that allows NULL):
-- @automodel
-- conditions_type: true
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, age = #{age??}]
WHERE id = #{user_id}
With conditions_type, the diff comparison works naturally: if old.age != new.age, the clause is included — and new.age being None means "set to NULL".
Struct Reuse:
queries/users/update_user_profile_diff.sql:
-- @automodel
-- conditions_type: true
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
WHERE id = #{user_id}
queries/users/update_user_metadata_diff.sql:
-- @automodel
-- conditions_type: "UpdateUserProfileDiffParams" # Reuses existing diff struct
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
WHERE id = #{user_id}
return_type: Custom Return Type Names
Customize the name of return type structs (generated for multi-column SELECT queries) and enable struct reuse across queries.
Basic Usage:
queries/users/get_user_summary.sql:
-- @automodel
-- return_type: "UserSummary" # Custom name instead of GetUserSummaryItem
-- @end
SELECT id, name, email FROM users WHERE id = #{user_id}
Generated Code:
```rust
// Sketch of the generated struct and signature (client/error types depend on configuration)
pub struct UserSummary {
    pub id: i32,
    pub name: String,
    pub email: String,
}

pub async fn get_user_summary(client: &Client, user_id: i32) -> Result<UserSummary, Error>
```
Struct Reuse:
Multiple queries returning the same columns can share the same struct:
queries/users/get_user_summary.sql:
-- @automodel
-- return_type: "UserSummary" # Generates the struct
-- @end
SELECT id, name, email FROM users WHERE id = #{user_id}
queries/users/get_user_info_by_email.sql:
-- @automodel
-- return_type: "UserSummary" # Reuses the struct
-- @end
SELECT id, name, email FROM users WHERE email = #{email}
queries/users/get_all_user_summaries.sql:
-- @automodel
-- return_type: "UserSummary" # Reuses the struct
-- @end
SELECT id, name, email FROM users ORDER BY name
Only one UserSummary struct is generated, shared by all three functions.
Cross-Struct Reuse
You can reuse struct names across queries. AutoModel will:
- Auto-generate if the struct doesn't exist yet (from the first query that uses it)
- Reuse if the struct already exists (from a previous query in the same module)
- Validate that fields match exactly when reusing
queries/users/get_user_info.sql:
-- @automodel
-- return_type: "UserInfo" # First use: generates UserInfo struct from return columns
-- @end
SELECT id, name, email FROM users WHERE id = #{user_id}
queries/users/update_user_info.sql:
-- @automodel
-- parameters_type: "UserInfo" # Second use: reuses existing UserInfo struct for parameters
-- @end
UPDATE users SET name = #{name}, email = #{email} WHERE id = #{id}
Usage:
```rust
// Get user info
let user = get_user_info(&client, user_id).await?;

// Modify and update using the same struct
let updated = UserInfo { name: "New Name".to_string(), ..user };
update_user_info(&client, updated).await?;
```
Custom Derive Traits
Add additional derive traits to generated structs and enums using *_derives options. These are combined with the global defaults configured in your build.rs.
Global Default Derives
Configure derive traits that apply to all generated types in your build.rs:
```rust
// Sketch — field name illustrative
let defaults = DefaultsConfig {
    derives: vec!["Clone".to_string()],
    ..Default::default()
};
```
This ensures all generated structs include Clone in addition to the always-present Debug trait.
Per-Query Additional Derives
Add query-specific derive traits that append to the global defaults:
-- @automodel
-- return_type: "UserId"
-- return_type_derives:
-- - serde::Serialize
-- - serde::Deserialize
-- - PartialEq
-- - Eq
-- @end
SELECT id FROM users WHERE email = #{email}
Generates:

```rust
// Sketch of the generated struct
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, PartialEq, Eq)]
pub struct UserId {
    pub id: i32,
}
```
Note: Clone comes from global defaults, serde traits and PartialEq/Eq from per-query config.
Available Options:
- `conditions_type_derives` - For conditions struct (used with `conditions_type`)
- `parameters_type_derives` - For parameters struct (used with `parameters_type`)
- `return_type_derives` - For return type struct
- `error_type_derives` - For constraint error enum
Trait Merging:
- Global defaults are applied first
- Per-query derives are appended
- Duplicates are automatically removed
- `Debug` is always included by default
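The merging behavior amounts to an order-preserving, de-duplicating union. A sketch (illustrative, not the actual implementation):

```rust
// Sketch: merge global default derives with per-query derives,
// always starting from Debug, preserving order, removing duplicates
pub fn merge_derives(globals: &[&str], per_query: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = vec!["Debug".to_string()];
    for derive in globals.iter().chain(per_query.iter()).map(|s| s.to_string()) {
        if !out.contains(&derive) {
            out.push(derive);
        }
    }
    out
}

fn main() {
    let merged = merge_derives(&["Clone"], &["serde::Serialize", "Clone", "PartialEq"]);
    // Clone appears once even though both lists contain it
    assert_eq!(merged, vec!["Debug", "Clone", "serde::Serialize", "PartialEq"]);
}
```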
Build-Time Validation
AutoModel validates struct field compatibility at build time:
- Auto-Generation: If a named struct doesn't exist, AutoModel automatically generates it from the query
- Field Matching: When reusing an existing struct, query parameters/columns must exactly match struct fields (names and types)
- Clear Error Messages: Validation failures provide helpful guidance
Example validation errors:
Error: Query parameter 'age' not found in struct 'UserInfo'.
Available fields: id, name, email
Error: Type mismatch for parameter 'id' in struct 'UserInfo':
expected 'i64', but query requires 'i32'
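The check behind those messages can be sketched as a name-and-type comparison. Types and message wording below are illustrative:

```rust
// Illustrative sketch: validate that every query parameter has a matching
// struct field with the same Rust type, mimicking the error messages above
pub fn validate(
    struct_name: &str,
    fields: &[(&str, &str)], // (name, rust type) of the existing struct
    params: &[(&str, &str)], // (name, rust type) required by the query
) -> Result<(), String> {
    for (pname, ptype) in params {
        match fields.iter().find(|(fname, _)| fname == pname) {
            None => {
                let available: Vec<&str> = fields.iter().map(|(n, _)| *n).collect();
                return Err(format!(
                    "Query parameter '{}' not found in struct '{}'. Available fields: {}",
                    pname, struct_name, available.join(", ")
                ));
            }
            Some((_, ftype)) if ftype != ptype => {
                return Err(format!(
                    "Type mismatch for parameter '{}' in struct '{}': expected '{}', but query requires '{}'",
                    pname, struct_name, ftype, ptype
                ));
            }
            _ => {}
        }
    }
    Ok(())
}

fn main() {
    let fields = [("id", "i64"), ("name", "String"), ("email", "String")];
    assert!(validate("UserInfo", &fields, &[("id", "i64")]).is_ok());
    assert!(validate("UserInfo", &fields, &[("age", "i32")]).is_err());
    assert!(validate("UserInfo", &fields, &[("id", "i32")]).is_err());
}
```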
Struct Definition Sources
Structs can be generated from three sources:
- `parameters_type: true` → `{QueryName}Params` (input parameters)
- `conditions_type: true` → `{QueryName}Params` (conditional input parameters)
- `return_type: "Name"` → custom named struct (output columns)
- Multi-column SELECT → `{QueryName}Item` (output columns, when `return_type` is not specified)
When to Use Each Option
Use parameters_type:
- Queries with 3+ parameters where individual params become unwieldy
- Building query parameters from existing structs or API input
- Reusing parameter sets with slight modifications
- Improving code organization and reducing function signature complexity
Use conditions_type:
- Conditional queries (`#[...]`) with state comparison logic
- UPDATE queries that should only modify changed fields
- SELECT queries with filters that should only apply when criteria changed
- Implementing PATCH-style REST endpoints
- Avoiding the verbosity of many `Option<T>` parameters
Use return_type:
- Multiple queries returning the same column structure
- Creating domain-specific struct names (e.g., `UserSummary` instead of `GetUserItem`)
- Reusing return types as input parameters for related queries
- Building consistent DTOs across your API
Complete Example
queries/users/get_user_summary.sql:
-- @automodel
-- return_type: "UserSummary" # Define a common return type
-- @end
SELECT id, name, email FROM users WHERE id = #{user_id}
queries/users/search_users.sql:
-- @automodel
-- return_type: "UserSummary" # Reuse it in other queries
-- @end
SELECT id, name, email FROM users WHERE name ILIKE #{pattern}
queries/users/update_user_contact.sql:
-- @automodel
-- parameters_type: "UserSummary" # Use it as input parameters
-- @end
UPDATE users SET name = #{name}, email = #{email} WHERE id = #{id}
queries/users/partial_update_user.sql:
-- @automodel
-- conditions_type: true # Generates PartialUpdateUserParams
-- @end
UPDATE users
SET updated_at = NOW()
#[, name = #{name?}]
#[, email = #{email?}]
WHERE id = #{user_id}
Generated Code:
```rust
// Single struct definition shared across queries
pub struct UserSummary {
    pub id: i32,
    pub name: String,
    pub email: String,
}

// Sketch of the generated signatures (client/error types depend on configuration)
pub async fn get_user_summary(client: &Client, user_id: i32) -> Result<UserSummary, Error>;
pub async fn search_users(client: &Client, pattern: String) -> Result<Vec<UserSummary>, Error>;
pub async fn update_user_contact(client: &Client, params: UserSummary) -> Result<(), Error>;
pub async fn partial_update_user(
    client: &Client,
    old: &PartialUpdateUserParams,
    new: &PartialUpdateUserParams,
    user_id: i32,
) -> Result<(), Error>;
```
Notes
- Auto-generation of named structs: If a struct name is specified but doesn't exist yet, AutoModel generates it automatically
- Struct reuse from previous queries: You can reference structs generated by earlier queries in the same module
- Exact field matching: When reusing existing structs, all query parameters/columns must match struct fields exactly
- No subset matching: You cannot use a struct with extra fields; all fields must match
- parameters_type ignored when conditions_type is enabled: Diff-based queries already use structured parameters
Batch Insert with UNNEST Pattern
AutoModel supports efficient batch inserts using PostgreSQL's UNNEST function, which allows you to insert multiple rows in a single query. This is much more efficient than inserting rows one at a time.
Basic UNNEST Pattern
PostgreSQL's UNNEST function can expand multiple arrays into a set of rows:
INSERT INTO users (name, email, age)
SELECT * FROM UNNEST(
ARRAY['Alice', 'Bob', 'Charlie'],
ARRAY['alice@example.com', 'bob@example.com', 'charlie@example.com'],
ARRAY[25, 30, 35]
)
RETURNING id, name, email, age, created_at;
Using UNNEST with AutoModel
Define a batch insert query in a SQL file:
queries/users/insert_users_batch.sql:
-- @automodel
-- description: Insert multiple users using UNNEST pattern
-- expect: multiple
-- multiunzip: true
-- @end
INSERT INTO users (name, email, age)
SELECT * FROM UNNEST(#{name}::text[], #{email}::text[], #{age}::int4[])
RETURNING id, name, email, age, created_at
Key Points:
- Use array parameters: `#{name}::text[]`, `#{email}::text[]`, etc.
- Include explicit type casts for proper type inference
- Set `expect: multiple` to return a vector of results
- Set `multiunzip: true` to enable the special batch insert mode
The multiunzip Configuration Parameter
When multiunzip: true is set, AutoModel generates special code to handle batch inserts more ergonomically:
Without multiunzip (standard array parameters):
```rust
// You would need to pass separate arrays for each column
insert_users_batch(&client, names, emails, ages).await?;
```
With multiunzip: true (generates a record struct):
```rust
// AutoModel generates an InsertUsersBatchRecord struct,
// so you can pass a single vector of records
insert_users_batch(&client, records).await?;
```
Nullable Elements in Batch Inserts
Both with and without multiunzip, you can mark array parameters whose individual elements can be NULL — the `[?]` suffix without multiunzip, or an optional (`?`) struct field with multiunzip:
Without multiunzip:
-- @automodel
-- expect: multiple
-- @end
INSERT INTO users (name, email, age)
SELECT * FROM UNNEST(
#{names}::text[],
#{emails}::text[],
#{ages??}::int4[] -- Array where individual elements can be NULL
)
Generated function signature (sketch; parameter and type names are illustrative, the error type follows the documented `*Constraints` naming):
```rust
pub async fn insert_users_batch(
    pool: &sqlx::PgPool,
    names: Vec<String>,
    emails: Vec<String>,
    ages: Vec<Option<i32>>, // elements may be NULL because of #{ages??}
) -> Result<Vec<InsertUsersBatchRow>, Error<InsertUsersBatchConstraints>>
```
With multiunzip:
-- @automodel
-- expect: multiple
-- multiunzip: true
-- @end
INSERT INTO users (name, email, age)
SELECT * FROM UNNEST(
#{name}::text[],
#{email}::text[],
#{age?}::int4[] -- Use ? in struct field for optional
)
Generated struct with optional field (sketch; the exact name follows the query name):
```rust
pub struct InsertUsersBatchRecord {
    pub name: String,
    pub email: String,
    pub age: Option<i32>, // Unpacks to Vec<Option<i32>> via multiunzip
}
```
How multiunzip Works
When multiunzip: true is enabled, AutoModel:
- Generates an input record struct with fields matching your parameters
- Uses `itertools::multiunzip()` to transform `Vec<Record>` into a tuple of arrays `(Vec<name>, Vec<email>, Vec<age>)`
- Binds each array to the corresponding SQL parameter
Generated function signature (sketch; names follow the query name):
```rust
pub async fn insert_users_batch(
    pool: &sqlx::PgPool,
    items: Vec<InsertUsersBatchRecord>,
) -> Result<Vec<InsertUsersBatchRow>, Error<InsertUsersBatchConstraints>>
```
Internal implementation:
```rust
use itertools::Itertools;

// Transform Vec<Record> into separate arrays
let (names, emails, ages): (Vec<String>, Vec<String>, Vec<i32>) = items
    .into_iter()
    .map(|r| (r.name, r.email, r.age))
    .multiunzip();

// Bind each array to the query
let query = query.bind(names);
let query = query.bind(emails);
let query = query.bind(ages);
```
Multiunzip Crate Selection
By default, AutoModel uses itertools::multiunzip() which supports up to 12 parameters. For batch inserts with more than 12 columns, you can configure AutoModel to use the many-unzip crate instead, which supports up to 196 parameters.
Configure in your build.rs (sketch; the exact field name may differ):
```rust
let defaults = DefaultsConfig {
    multiunzip_crate: MultiunzipCrate::ManyUnzip,
    ..Default::default()
};
```
When to use which:
- `MultiunzipCrate::Itertools` (default): for queries with up to 12 parameters. Most common use case.
- `MultiunzipCrate::ManyUnzip`: for queries with 13-196 parameters. Requires adding `many-unzip` to your `Cargo.toml`:

```toml
[dependencies]
many-unzip = "0.1" # or latest version
```
The generated code automatically uses the correct trait based on your configuration:
- Itertools: `use itertools::Itertools;`
- ManyUnzip: `use many_unzip::ManyUnzip;`
Both crates provide the same .multiunzip() method, so the rest of the generated code remains identical.
Complete Example
queries/posts/insert_posts_batch.sql:
-- @automodel
-- description: Batch insert multiple posts
-- expect: multiple
-- multiunzip: true
-- @end
INSERT INTO posts (title, content, author_id, published_at)
SELECT * FROM UNNEST(
#{title}::text[],
#{content}::text[],
#{author_id}::int4[],
#{published_at}::timestamptz[]
)
RETURNING id, title, author_id, created_at
Usage (sketch; struct and argument names are illustrative):
```rust
use crate::generated::posts::insert_posts_batch;

let posts = vec![
    // Fields mirror the SQL parameters:
    // title, content, author_id, published_at
    InsertPostsBatchRecord { /* ... */ },
];
let inserted = insert_posts_batch(&pool, posts).await?;
println!("Inserted {} posts", inserted.len());
```
Array Columns in Batch Inserts (jsonb[], text[], etc.)
PostgreSQL's UNNEST flattens multidimensional arrays. This means you cannot pass jsonb[][] or text[][] to insert into a column of type jsonb[] or text[] — UNNEST would flatten the nested arrays into individual elements instead of producing one array per row.
The workaround is to pass each row's array value as a single jsonb (a JSON array), then reconstruct the PostgreSQL array in SQL using jsonb_array_elements:
For nullable array columns (jsonb[] DEFAULT NULL):
-- @automodel
-- expect: multiple
-- multiunzip: true
-- types:
-- tags: "Vec<Option<crate::models::UserTag>>"
-- public.users.tags: "Vec<Option<crate::models::UserTag>>"
-- @end
INSERT INTO public.users (name, email, tags)
SELECT name, email,
CASE WHEN tags IS NULL THEN NULL
ELSE ARRAY(SELECT jsonb_array_elements(tags)) END
FROM UNNEST(
#{name}::text [],
#{email}::text [],
#{tags}::jsonb []
) AS t(name, email, tags)
RETURNING id, name, email, tags;
For required array columns (jsonb[] NOT NULL):
-- @automodel
-- expect: multiple
-- multiunzip: true
-- types:
-- labels: "Vec<Option<crate::models::UserTag>>"
-- public.users.labels: "Vec<Option<crate::models::UserTag>>"
-- @end
INSERT INTO public.users (name, email, labels)
SELECT name, email,
ARRAY(SELECT jsonb_array_elements(labels))
FROM UNNEST(
#{name}::text [],
#{email}::text [],
#{labels}::jsonb []
) AS t(name, email, labels)
RETURNING id, name, email, labels;
How it works:
- The generated Rust code automatically serializes each row's array value to a `jsonb` value (a JSON array like `[{"label":"rust"},{"label":"go"}]`) — this is transparent to the caller
- `UNNEST` on `jsonb[]` yields one `jsonb` scalar per row — no flattening
- `ARRAY(SELECT jsonb_array_elements(tags))` reconstructs the `jsonb[]` from the JSON array
- For nullable columns, the `CASE WHEN ... IS NULL THEN NULL` guard preserves SQL NULLs
The types: annotation maps both the parameter and output column to your custom Rust type (e.g. Vec<Option<crate::models::UserTag>>). AutoModel handles serialization/deserialization of each element individually.
> Why not `jsonb[][]`? PostgreSQL requires uniform sub-array lengths in multidimensional arrays, and `UNNEST` flattens all dimensions. These constraints make `type[][]` unusable for variable-length per-row arrays.
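The flattening is easy to observe directly: `UNNEST` on a two-dimensional array yields scalars, not sub-arrays.

```sql
SELECT UNNEST(ARRAY[[1,2],[3,4]]);
-- Returns four rows (1, 2, 3, 4); the per-row boundaries are lost
```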
For plain text[] columns (using jsonb_array_elements_text to reconstruct):
-- @automodel
-- expect: multiple
-- multiunzip: true
-- @end
INSERT INTO public.items (name, tags)
SELECT name,
ARRAY(SELECT jsonb_array_elements_text(tags))::text[]
FROM UNNEST(
#{name}::text [],
#{tags}::jsonb []
) AS t(name, tags)
RETURNING id, name, tags;
The pattern is the same as for jsonb[] — in the SQL, the parameter is declared as jsonb[] so that UNNEST receives flat scalars. AutoModel's generated code automatically serializes the Rust Vec<String> values to JSON arrays before binding, so the conversion is transparent to the caller. On the SQL side, jsonb_array_elements_text() extracts text values from each JSON array, and ARRAY(...)::text[] reconstructs the text[] column.
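The SQL-side reconstruction can be sanity-checked in isolation with a literal `jsonb` value:

```sql
SELECT ARRAY(SELECT jsonb_array_elements_text('["rust","go"]'::jsonb))::text[];
-- Returns a single text[] value: {rust,go}
```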
UNNEST with Composite Types
As an alternative to the multiunzip pattern (where each column is a separate array parameter), you can use PostgreSQL composite types with UNNEST. Instead of passing N separate arrays in the SQL, you pass a single array of a composite (row) type. AutoModel auto-generates the corresponding Rust struct with Encode, Decode, Type, and PgHasArrayType implementations.
From the caller's perspective, both approaches look the same — you pass a Vec<SomeStruct> and get results back. The difference is in how the SQL query is written and what happens under the hood: multiunzip splits the struct into separate arrays internally, while composite types bind a single typed array directly to PostgreSQL.
When to prefer composite types over multiunzip:
- Your input rows have nested structure (e.g., composite fields within composites)
- You don't want the `itertools`/`many-unzip` crate dependency
- No `multiunzip: true` metadata is needed — the composite type is detected automatically
- You want to leverage PostgreSQL's type system for input validation
Step 1: Define a composite type in PostgreSQL:
CREATE TYPE public.user_with_links_input AS (
    name TEXT,
    email TEXT,
    social_links JSONB
);
Step 2: Write the query using the composite type array:
queries/users_array_fields/insert_users_bulk_composite.sql:
-- @automodel
-- description: Bulk insert users with social links using composite type UNNEST
-- expect: multiple
-- types:
-- public.users.social_links: "Vec<crate::models::UserSocialLink>"
-- @end
INSERT INTO public.users (name, email, social_links)
SELECT r.name, r.email, r.social_links
FROM UNNEST(#{items}::public.user_with_links_input[]) AS r(name, email, social_links)
RETURNING id, name, email, social_links
No multiunzip: true is needed. AutoModel detects the composite type from the ::public.user_with_links_input[] cast and generates:
```rust
// Auto-generated composite type struct with sqlx trait impls
// (sketch; the actual generated code may differ)
#[derive(Debug, Clone)]
pub struct UserWithLinksInput {
    pub name: String,
    pub email: String,
    pub social_links: Vec<crate::models::UserSocialLink>,
}

// Function accepts a single Vec of the composite type
pub async fn insert_users_bulk_composite(
    pool: &sqlx::PgPool,
    items: Vec<UserWithLinksInput>,
) -> Result<Vec<InsertUsersBulkCompositeRow>, Error<InsertUsersBulkCompositeConstraints>>
```
Step 3: Use in Rust code:
```rust
// Sketch; paths and construction details are illustrative
use crate::models::UserSocialLink;

let items = vec![
    UserWithLinksInput { /* name, email, social_links */ },
];
let results = insert_users_bulk_composite(&pool, items).await?;
```
Comparison: multiunzip vs composite type UNNEST
| Aspect | `multiunzip: true` | Composite type |
|---|---|---|
| Rust caller API | `Vec<Record>` | `Vec<CompositeType>` (same feel) |
| SQL parameter style | Separate arrays: `#{name}::text[]`, `#{email}::text[]` | Single array: `#{items}::composite_type[]` |
| Under the hood | Struct split into arrays via `multiunzip()` | Array of composite bound directly to PG |
| Requires DDL | No (uses built-in types) | Yes (`CREATE TYPE`) |
| Metadata config | `multiunzip: true` | None (auto-detected) |
| Nested composites | Not supported | Supported (composites within composites) |
| Dependencies | `itertools` or `many-unzip` crate | None |
Both approaches produce the same result — efficient bulk inserts via a single INSERT ... SELECT * FROM UNNEST(...) statement.
Upsert Pattern (INSERT ... ON CONFLICT)
PostgreSQL's ON CONFLICT clause allows you to handle conflicts when inserting data, enabling "upsert" operations (insert if new, update if exists). AutoModel fully supports this pattern for both single-row and batch operations.
Understanding EXCLUDED
In the DO UPDATE clause, EXCLUDED is a special table reference provided by PostgreSQL that contains the row that would have been inserted if there had been no conflict. This allows you to reference the attempted insert values.
INSERT INTO users (email, name, age)
VALUES ('alice@example.com', 'Alice', 25)
ON CONFLICT (email)
DO UPDATE SET
name = EXCLUDED.name, -- Use the name from the VALUES clause
age = EXCLUDED.age, -- Use the age from the VALUES clause
updated_at = NOW() -- Set updated_at to current timestamp
In this example:
- `EXCLUDED.name` refers to `'Alice'` (the value being inserted)
- `EXCLUDED.age` refers to `25` (the value being inserted)
- `users.name` and `users.age` refer to the existing row's values in the table
You can also mix both references:
-- Only update if the new age is greater than the existing age
DO UPDATE SET age = EXCLUDED.age WHERE users.age < EXCLUDED.age
Single Row Upsert
Use ON CONFLICT to update existing rows when a conflict occurs:
queries/users/upsert_user.sql:
-- @automodel
-- description: Insert a new user or update if email already exists
-- expect: exactly_one
-- types:
-- profile: "crate::models::UserProfile"
-- @end
INSERT INTO users (email, name, age, profile)
VALUES (#{email}, #{name}, #{age}, #{profile})
ON CONFLICT (email)
DO UPDATE SET
name = EXCLUDED.name,
age = EXCLUDED.age,
profile = EXCLUDED.profile,
updated_at = NOW()
RETURNING id, email, name, age, created_at, updated_at
Usage:
```rust
// Sketch; module paths, argument order, and types are illustrative
use crate::generated::users::upsert_user;
use crate::models::UserProfile;

// First insert - creates new user
let user = upsert_user(&pool, "alice@example.com", "Alice", 25, profile).await?;

// Second call with same email - updates existing user
let updated_user = upsert_user(&pool, "alice@example.com", "Alice", 26, new_profile).await?;

// Same ID, but updated fields
assert_eq!(user.id, updated_user.id);
```
Batch Upsert with UNNEST
Combine UNNEST with ON CONFLICT for efficient batch upserts:
queries/users/upsert_users_batch.sql:
-- @automodel
-- description: Batch upsert users - insert new or update existing by email
-- expect: multiple
-- multiunzip: true
-- @end
INSERT INTO users (email, name, age)
SELECT * FROM UNNEST(
#{email}::text[],
#{name}::text[],
#{age}::int4[]
)
ON CONFLICT (email)
DO UPDATE SET
name = EXCLUDED.name,
age = EXCLUDED.age,
updated_at = NOW()
RETURNING id, email, name, age, created_at, updated_at
Usage:
```rust
// Sketch; struct and argument names are illustrative
use crate::generated::users::upsert_users_batch;

let users = vec![
    UpsertUsersBatchRecord { /* email, name, age */ },
    UpsertUsersBatchRecord { /* ... */ },
];
let results = upsert_users_batch(&pool, users).await?;

// Returns 2 rows: Bob (new) and Alice (updated)
println!("Upserted {} users", results.len());
```
CLI Features
Commands
- `generate` - Generate Rust code from SQL query files
CLI Options
Generate Command
- `-d, --database-url <URL>` - Database connection URL
- `-q, --queries-dir <DIR>` - Directory containing SQL query files
- `-o, --output <FILE>` - Custom output file path
- `-m, --module <NAME>` - Module name for generated code
- `--dry-run` - Preview generated code without writing files
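Putting the options together, an invocation might look like this (the binary name and connection URL are placeholders, not confirmed by this document):

```sh
automodel generate \
  --database-url postgres://localhost/mydb \
  --queries-dir queries \
  --dry-run
```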
Examples
The example-app/ directory contains:
- `queries/` - SQL files with query definitions organized by module
- `migrations/` - Database schema migrations for testing
Workspace Commands
Standard Cargo workspace invocations apply (package names are placeholders):
```sh
# Build everything
cargo build --workspace

# Test the library
cargo test

# Run the CLI tool
cargo run -p <cli-package> -- generate --help

# Run the example app
cargo run -p example-app

# Check specific package
cargo check -p <package>
```
Error Handling and Custom Error Types
AutoModel provides sophisticated error handling with automatic constraint extraction and type-safe error types. Different types of queries return different error types based on whether they can violate database constraints.
Error Type Overview
AutoModel generates two types of error enums:
- `ErrorReadOnly` - For SELECT queries that cannot violate constraints
- `Error<C>` - For mutation queries (INSERT, UPDATE, DELETE) with constraint tracking
ErrorReadOnly - For Read-Only Queries
All SELECT queries return ErrorReadOnly, a simple error enum without constraint violation variants:
Generated Code:
Example Usage:
queries/users/get_user_by_id.sql:
-- @automodel
-- expect: exactly_one
-- @end
SELECT id, name, email FROM users WHERE id = #{user_id}
Generated function (sketch; the row struct name is illustrative):
```rust
pub async fn get_user_by_id(
    pool: &sqlx::PgPool,
    user_id: i32,
) -> Result<GetUserByIdRow, ErrorReadOnly>
```
Error - For Mutation Queries
Mutation queries (INSERT, UPDATE, DELETE) return Error<C> where C is a query-specific constraint enum. This provides type-safe handling of constraint violations.
Automatic Constraint Extraction
AutoModel automatically extracts all constraints from your PostgreSQL database for each table referenced in mutation queries. This happens at build time by querying the PostgreSQL system catalogs.
Extracted Constraint Information:
- Unique constraints - Including primary keys and unique indexes
- Foreign key constraints - With referenced table and column information
- Check constraints - With constraint expression
- NOT NULL constraints - For columns that cannot be null
- Domain check constraints - CHECK constraints from domain types used by table columns
Example: For a users table with:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email TEXT UNIQUE NOT NULL,
age INT CHECK (age >= 0),
organization_id INT REFERENCES organizations(id)
);
AutoModel generates:
The generic Error<C> type handles constraint violations gracefully:
Custom Error Type Names with error_type
By default, AutoModel generates error type names based on the query name (e.g., InsertUserConstraints). You can customize this using the error_type configuration option.
Basic Usage:
queries/users/insert_user.sql:
-- @automodel
-- error_type: "UserError" # Custom name instead of InsertUserConstraints
-- @end
INSERT INTO users (email, name, age)
VALUES (#{email}, #{name}, #{age})
RETURNING id
Generated Code (sketch; the error type is named `UserError` instead of `InsertUserConstraints`):
```rust
pub async fn insert_user(
    pool: &sqlx::PgPool,
    email: String,
    name: String,
    age: i32,
) -> Result<InsertUserRow, Error<UserError>>
```
Error Type Reuse
Multiple queries that operate on the same table(s) can reuse the same error type. AutoModel validates at build time that the constraints match exactly.
Example:
queries/users/insert_user.sql:
-- @automodel
-- error_type: "UserError" # First query generates the error type
-- @end
INSERT INTO users (email, name, age)
VALUES (#{email}, #{name}, #{age})
RETURNING id
queries/users/update_user_email.sql:
-- @automodel
-- error_type: "UserError" # Reuses UserError - constraints must match
-- @end
UPDATE users SET email = #{email}
WHERE id = #{user_id}
RETURNING id
queries/users/upsert_user.sql:
-- @automodel
-- error_type: "UserError" # Reuses UserError
-- @end
INSERT INTO users (email, name, age)
VALUES (#{email}, #{name}, #{age})
ON CONFLICT (email)
DO UPDATE SET name = EXCLUDED.name, age = EXCLUDED.age
RETURNING id
Build-Time Validation:
AutoModel ensures that when you reuse an error type:
- The referenced error type exists (defined by a previous query)
- The constraints extracted for the current query exactly match the constraints in the reused type
- Both queries reference the same table(s)
Supported PostgreSQL Types
AutoModel supports a comprehensive set of PostgreSQL types with automatic mapping to Rust types. All types support Option<T> for nullable columns.
Boolean & Numeric Types
| PostgreSQL Type | Rust Type |
|---|---|
| `BOOL` | `bool` |
| `CHAR` | `i8` |
| `INT2` (`SMALLINT`) | `i16` |
| `INT4` (`INTEGER`) | `i32` |
| `INT8` (`BIGINT`) | `i64` |
| `FLOAT4` (`REAL`) | `f32` |
| `FLOAT8` (`DOUBLE PRECISION`) | `f64` |
| `NUMERIC`, `DECIMAL` | `rust_decimal::Decimal` |
| `OID`, `REGPROC`, `XID`, `CID` | `u32` |
| `XID8` | `u64` |
| `TID` | `(u32, u32)` |
String & Text Types
| PostgreSQL Type | Rust Type |
|---|---|
| `TEXT` | `String` |
| `VARCHAR` | `String` |
| `CHAR(n)`, `BPCHAR` | `String` |
| `NAME` | `String` |
| `XML` | `String` |
Binary & Bit Types
| PostgreSQL Type | Rust Type |
|---|---|
| `BYTEA` | `Vec<u8>` |
| `BIT`, `BIT(n)` | `bit_vec::BitVec` |
| `VARBIT` | `bit_vec::BitVec` |
Date & Time Types
| PostgreSQL Type | Rust Type |
|---|---|
| `DATE` | `chrono::NaiveDate` |
| `TIME` | `chrono::NaiveTime` |
| `TIMETZ` | `sqlx::postgres::types::PgTimeTz` |
| `TIMESTAMP` | `chrono::NaiveDateTime` |
| `TIMESTAMPTZ` | `chrono::DateTime<chrono::Utc>` |
| `INTERVAL` | `sqlx::postgres::types::PgInterval` |
Range Types
| PostgreSQL Type | Rust Type |
|---|---|
| `INT4RANGE` | `sqlx::postgres::types::PgRange<i32>` |
| `INT8RANGE` | `sqlx::postgres::types::PgRange<i64>` |
| `NUMRANGE` | `sqlx::postgres::types::PgRange<rust_decimal::Decimal>` |
| `TSRANGE` | `sqlx::postgres::types::PgRange<chrono::NaiveDateTime>` |
| `TSTZRANGE` | `sqlx::postgres::types::PgRange<chrono::DateTime<chrono::Utc>>` |
| `DATERANGE` | `sqlx::postgres::types::PgRange<chrono::NaiveDate>` |
Multirange Types
| PostgreSQL Type | Rust Type |
|---|---|
| `INT4MULTIRANGE` | `serde_json::Value` |
| `INT8MULTIRANGE` | `serde_json::Value` |
| `NUMMULTIRANGE` | `serde_json::Value` |
| `TSMULTIRANGE` | `serde_json::Value` |
| `TSTZMULTIRANGE` | `serde_json::Value` |
| `DATEMULTIRANGE` | `serde_json::Value` |
Network & Address Types
| PostgreSQL Type | Rust Type |
|---|---|
| `INET` | `std::net::IpAddr` |
| `CIDR` | `std::net::IpAddr` |
| `MACADDR` | `mac_address::MacAddress` |
Geometric Types
| PostgreSQL Type | Rust Type |
|---|---|
| `POINT` | `sqlx::postgres::types::PgPoint` |
| `LINE` | `sqlx::postgres::types::PgLine` |
| `LSEG` | `sqlx::postgres::types::PgLseg` |
| `BOX` | `sqlx::postgres::types::PgBox` |
| `PATH` | `sqlx::postgres::types::PgPath` |
| `POLYGON` | `sqlx::postgres::types::PgPolygon` |
| `CIRCLE` | `sqlx::postgres::types::PgCircle` |
JSON & Special Types
| PostgreSQL Type | Rust Type |
|---|---|
| `JSON` | `serde_json::Value` |
| `JSONB` | `serde_json::Value` |
| `JSONPATH` | `String` |
| `UUID` | `uuid::Uuid` |
Array Types
All types support PostgreSQL arrays with automatic mapping to Vec<T>:
| PostgreSQL Array Type | Rust Type |
|---|---|
| `BOOL[]` | `Vec<bool>` |
| `INT2[]`, `INT4[]`, `INT8[]` | `Vec<i16>`, `Vec<i32>`, `Vec<i64>` |
| `FLOAT4[]`, `FLOAT8[]` | `Vec<f32>`, `Vec<f64>` |
| `TEXT[]`, `VARCHAR[]` | `Vec<String>` |
| `BYTEA[]` | `Vec<Vec<u8>>` |
| `UUID[]` | `Vec<uuid::Uuid>` |
| `DATE[]`, `TIMESTAMP[]`, `TIMESTAMPTZ[]` | `Vec<chrono::NaiveDate>`, `Vec<chrono::NaiveDateTime>`, `Vec<chrono::DateTime<chrono::Utc>>` |
| `INT4RANGE[]`, `DATERANGE[]`, etc. | `Vec<sqlx::postgres::types::PgRange<T>>` |
| And many more... | See the type mapping tables above |
Full-Text Search & System Types
| PostgreSQL Type | Rust Type |
|---|---|
| `TSQUERY` | `String` |
| `REGCONFIG`, `REGDICTIONARY`, `REGNAMESPACE`, `REGROLE`, `REGCOLLATION` | `u32` |
| `PG_LSN` | `u64` |
| `ACLITEM` | `String` |
Custom Enum Types
PostgreSQL custom enums are automatically detected and mapped to generated Rust enums with proper encoding/decoding support. See the Configuration Options section for details on enum handling.
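For example, a database enum like the following would be picked up during generation (the exact Rust-side naming is described in the Configuration Options section):

```sql
CREATE TYPE user_role AS ENUM ('admin', 'member', 'guest');
```

Columns of type `user_role` then map to the generated Rust enum, wrapped in `Option` when the column is nullable.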
Disabling Formatting of Generated Code
AutoModel emits a // @generated marker in the first few lines of every generated file. To prevent rustfmt from reformatting generated code, add this to your workspace rustfmt.toml:
```toml
format_generated_files = false
```
When this option is set, rustfmt skips any file that contains @generated in its first five lines. See the rustfmt documentation for details.
Advanced Guides
- Composite Types vs JSONB — choosing between PostgreSQL composite types and JSONB columns, with side-by-side comparisons of insert/batch insert/select, backward & forward compatibility analysis, and best practices for schema evolution without downtime.
Requirements
- PostgreSQL database (for actual code generation)
- Rust 1.70+
- tokio runtime
License
MIT License - see LICENSE file for details.