# resolute
Compile-time checked PostgreSQL queries for Rust with binary-format performance.
resolute validates SQL against a live database at compile time (or offline via cached metadata), generates typed result structs, and executes queries using PostgreSQL's binary wire format.
## Features

- 7 query macros: `query!`, `query_as!`, `query_scalar!`, `query_file!`, `query_file_as!`, `query_file_scalar!`, `query_unchecked!`
- Named parameters: `:name` syntax in both macros and runtime API (not available in sqlx)
- `Executor` trait: write generic functions that work with `Client`, `Transaction`, or `Pool`. No sqlx lifetime gymnastics.
- `atomic()` with savepoint nesting: auto-`BEGIN` on `Client`, auto-`SAVEPOINT` on `Transaction`. Same function, correct behavior in any context.
- Custom PG types: `#[derive(PgEnum)]`, `#[derive(PgComposite)]`, `#[derive(PgDomain)]`
- Integer-backed enums: `#[repr(i32)]` on `PgEnum` for integer column storage
- Domain type arrays: `PgDomain` newtypes inherit array OIDs from their inner type
- Query type overrides: `"col: CustomType"` syntax in query macros for custom type mapping
- Rich `FromRow` derive: `skip`, `default`, `json`, `try_from`, `flatten` attributes
- Generic arrays: `Vec<T>` for all Encode/Decode types (bool, i16, i32, i64, f32, f64, String, UUID, chrono types, JSON, numeric, inet)
- Pool lifecycle hooks: `before_acquire`, `on_create`, `on_checkout`, `on_checkin`, `after_release`, `on_destroy`
- Offline builds: `.resolute/cache` + `resolute-cli prepare` for CI/Docker
- Connection pooling: `ExclusivePool` with typed checkout
- LISTEN/NOTIFY: `PgListener` for real-time notifications
- Migrations: embedded runner + CLI (create, run, revert, status, info, validate, seed)
- Database lifecycle: `resolute-cli database create/drop`
- Nullable detection: automatic `Option<T>` for nullable columns via `pg_attribute` introspection
- 2-5x faster than sqlx: binary encode is 4-5x faster, query latency 2.3-2.5x faster (benchmarked)
## Quick start

```rust
use resolute::{connect, query};

// Connection URL, table, and column names are illustrative placeholders.
async fn example() -> Result<(), resolute::Error> {
    let client = connect("postgres://localhost/mydb").await?;

    // query! is checked against the live database at compile time and
    // generates a typed result struct (exact binding syntax is illustrative):
    let user = query!("SELECT id, name FROM users WHERE id = :id", id = 1)
        .fetch_one(&client)
        .await?;

    println!("{}: {}", user.id, user.name);
    Ok(())
}
```
## Named parameters

Use `:name` instead of `$1`, `$2`, .... Duplicates reuse the same positional slot. `::` casts, string literals, and comments are handled correctly.

```rust
// Compile-time macro (recommended when the SQL is static).
// Table/columns and binding syntax are illustrative:
let row = query!("SELECT id, name FROM users WHERE id = :id", id = user_id)
    .fetch_one(&client)
    .await?;

// Duplicates: :id appears twice, bound once:
let row = query!(
    "SELECT id FROM users WHERE id = :id OR referrer_id = :id",
    id = user_id,
)
.fetch_one(&client)
.await?;
```
For runtime queries (dynamic SQL, no compile-time check), see Runtime query styles below.
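To make the rewriting rules concrete, here is a minimal sketch of the idea (not resolute's actual implementation): `:name` becomes `$n`, duplicates reuse a slot, and `::` casts and single-quoted literals pass through untouched. Comments and non-ASCII input are omitted for brevity.

```rust
// Sketch only: rewrite ":name" parameters to "$n" placeholders.
// Duplicated names reuse the same slot; "::" casts and quoted
// literals are copied verbatim. ASCII SQL assumed for brevity.
fn rewrite_named(sql: &str) -> (String, Vec<String>) {
    let mut out = String::new();
    let mut names: Vec<String> = Vec::new();
    let bytes = sql.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            b'\'' => {
                // Copy a single-quoted literal verbatim, including quotes.
                out.push('\'');
                i += 1;
                while i < bytes.len() {
                    out.push(bytes[i] as char);
                    if bytes[i] == b'\'' { i += 1; break; }
                    i += 1;
                }
            }
            b':' if i + 1 < bytes.len() && bytes[i + 1] == b':' => {
                // "::" is a PostgreSQL cast, not a parameter.
                out.push_str("::");
                i += 2;
            }
            b':' if i + 1 < bytes.len()
                && (bytes[i + 1].is_ascii_alphabetic() || bytes[i + 1] == b'_') =>
            {
                let start = i + 1;
                let mut end = start;
                while end < bytes.len()
                    && (bytes[end].is_ascii_alphanumeric() || bytes[end] == b'_')
                { end += 1; }
                let name = &sql[start..end];
                // Duplicates map to the slot assigned on first sight.
                let slot = match names.iter().position(|n| n == name) {
                    Some(p) => p + 1,
                    None => { names.push(name.to_string()); names.len() }
                };
                out.push_str(&format!("${slot}"));
                i = end;
            }
            c => { out.push(c as char); i += 1; }
        }
    }
    (out, names)
}
```
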
## Runtime query styles

Two ways to run a query at runtime against an `&impl Executor` (`Client`, `Transaction`, or pool handle). Pick whichever reads best for your call site.

### Fluent builder

```rust
use resolute::sql;

// Positional (illustrative table and columns):
let rows = sql("SELECT id, name FROM users WHERE age > $1 AND active = $2")
    .bind(18)
    .bind(true)
    .fetch_all(&client)
    .await?;

// Named:
let rows = sql("SELECT id, name FROM users WHERE age > :age AND active = :active")
    .bind_named("age", 18)
    .bind_named("active", true)
    .fetch_all(&client)
    .await?;

// Other terminators: .fetch_one, .fetch_opt, .execute
```
`bind` and `bind_named` take values by value (`T: SqlParam + Send + 'static`). Values that do not implement `SqlParam` fail to compile. Mixing `bind` and `bind_named` on the same chain panics: pick one style per query.
### Raw slice

Fully explicit, and the lowest-ceremony one-liner. Rust coerces `&T` to `&dyn SqlParam` at the slice-literal site when the target type is known from the function signature, so no explicit `as &dyn SqlParam` cast is needed.

```rust
// Positional (illustrative table and columns):
let rows = client
    .query("SELECT id, name FROM users WHERE age > $1 AND active = $2", &[&18, &true])
    .await?;

// Named (parameter-list shape is illustrative):
let rows = client
    .query_named("SELECT id, name FROM users WHERE age > :age", &[("age", &18)])
    .await?;
```
If inference ever struggles (generic code, empty slices, `Option::None` in the mix), write the coercion out explicitly: `&x as &dyn SqlParam`.
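The coercion rule itself is plain Rust and can be demonstrated without a database. `SqlParam` below is a stand-in trait, not resolute's; the point is that `&18` and `&"abc"` become `&dyn SqlParam` right inside the slice literal because the function signature fixes the target type:

```rust
// Toy stand-in for a parameter trait (NOT resolute's SqlParam):
trait SqlParam { fn describe(&self) -> String; }
impl SqlParam for i32  { fn describe(&self) -> String { format!("i32:{self}") } }
impl SqlParam for &str { fn describe(&self) -> String { format!("str:{self}") } }

// Because the parameter type is &[&dyn SqlParam], each &value in the
// slice literal unsize-coerces to &dyn SqlParam with no explicit cast:
fn describe_all(params: &[&dyn SqlParam]) -> Vec<String> {
    params.iter().map(|p| p.describe()).collect()
}
```

Calling `describe_all(&[&18, &"abc"])` compiles without any `as &dyn SqlParam` — exactly the ergonomics the raw-slice style relies on.
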
## Query type overrides

Use `"column_name: RustType"` syntax in SELECT aliases to override the inferred Rust type in `query!` and `query_scalar!` macros. This is useful for mapping columns to custom newtypes:

```rust
// A newtype over the column's underlying type (illustrative):
#[derive(PgDomain)]
struct UserId(i32);

// Without override: id would be inferred as i32.
// With override: the generated row struct's id field is typed as UserId.
let row = query!(r#"SELECT id AS "id: UserId", name FROM users WHERE id = :id"#, id = 1)
    .fetch_one(&client)
    .await?;

// row.id is UserId, not i32:
let user_id: UserId = row.id;
```
Type overrides work with nullable columns too. If the column is nullable, the field becomes `Option<UserId>`.

PostgreSQL casts (`::`) work normally and are not affected. `SELECT created_at::text` is a cast, not a type override. The override syntax uses a single `:` inside a quoted alias.
## Executor trait: generic over Client, Transaction, and Pool

Write functions once with `&impl Executor`. They work everywhere: no sqlx lifetime gymnastics, no consuming `self`, multiple queries on the same generic executor.

```rust
use resolute::Executor;

// Illustrative table and signature:
async fn create_user(db: &impl Executor, name: &str) -> Result<(), resolute::Error> {
    db.query("INSERT INTO users (name) VALUES ($1)", &[&name]).await?;
    Ok(())
}

// All of these work:
create_user(&client, "alice").await?;
create_user(&txn, "bob").await?;
create_user(&pool.get().await?, "carol").await?;
```
## Transactions

### Manual transactions

```rust
let txn = client.begin().await?;
create_user(&txn, "alice").await?;
create_profile(&txn, "alice's bio").await?; // illustrative helper
txn.commit().await?;
```

### Closure-based transactions

```rust
// Auto-commit on Ok, auto-rollback on Err (closure shape is illustrative):
client.with_transaction(|txn| async move {
    create_user(&txn, "alice").await?;
    Ok(())
}).await?;
```
### atomic(): context-aware atomicity

Write functions that always run atomically, regardless of whether the caller already has a transaction:

```rust
// Illustrative signature; atomic() issues BEGIN or SAVEPOINT
// depending on the concrete executor type:
async fn transfer(db: &impl Executor, from: i64, to: i64, amount: i64) -> Result<(), resolute::Error> {
    db.atomic(|tx| async move {
        // debit `from`, credit `to` ...
        Ok(())
    }).await
}

// Called with Client → uses BEGIN/COMMIT:
transfer(&client, 1, 2, 100).await?;

// Called inside a transaction → uses SAVEPOINT (nested, composable):
let txn = client.begin().await?;
transfer(&txn, 1, 2, 100).await?; // SAVEPOINT, not a nested BEGIN
other_work(&txn).await?;
txn.commit().await?;
```
## Custom PostgreSQL types

### String enums

Map to PostgreSQL `CREATE TYPE ... AS ENUM` types:

```rust
// Attribute name is illustrative; snake_case renaming is the default:
#[derive(PgEnum)]
#[pg_enum(rename_all = "snake_case")] // default
enum Mood {
    Happy,   // stored as 'happy'
    VerySad, // stored as 'very_sad'
}
```
Supported rename_all strategies: snake_case, lowercase, UPPERCASE, SCREAMING_SNAKE_CASE, camelCase, PascalCase, kebab-case.
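To make the default strategy concrete, here is a sketch of the transformation `rename_all = "snake_case"` performs on a variant name before it crosses the wire (not resolute's actual code, just the naming rule):

```rust
// Sketch: PascalCase variant name -> snake_case database label.
// Each uppercase letter after the first gets a preceding underscore,
// then everything is lowercased.
fn to_snake_case(variant: &str) -> String {
    let mut out = String::new();
    for (i, ch) in variant.chars().enumerate() {
        if ch.is_ascii_uppercase() {
            if i > 0 { out.push('_'); }
            out.push(ch.to_ascii_lowercase());
        } else {
            out.push(ch);
        }
    }
    out
}
```
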
### Integer-backed enums

Store enum values as integers in PostgreSQL (int2, int4, or int8 columns):

```rust
#[derive(PgEnum)]
#[repr(i32)]
enum Status {
    Inactive = 0,
    Active = 1,
}

// Encodes as int4 (4 bytes, big-endian). PgType::OID = 23, ARRAY_OID = 1007.
// Method names below are illustrative:
let mut buf = Vec::new();
Status::Active.encode(&mut buf)?; // encodes as 1 (i32)

// Decodes from binary or text:
let decoded = Status::decode(&buf)?;     // from binary int4
let decoded = Status::decode_text("0")?; // from text → Inactive
```
Supported repr types: #[repr(i16)] (int2), #[repr(i32)] (int4), #[repr(i64)] (int8). All variants must have explicit discriminants. Negative values are supported.
Design note: sqlx allows #[sqlx(transparent)] on #[repr(i32)] enums without explicit discriminants, relying on Rust's auto-incrementing discriminant behavior. Resolute requires explicit discriminants intentionally. Implicit discriminants are fragile (reordering variants silently changes database values), and the explicitness makes the database mapping unambiguous and auditable.
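The fragility argument can be shown with plain Rust enums, no database involved: with implicit discriminants, merely reordering variants changes the integers a column would store.

```rust
// With implicit discriminants, reordering variants silently renumbers them:
#[allow(dead_code)]
#[repr(i32)]
enum BeforeRefactor { Pending, Active, Closed } // Pending=0, Active=1, Closed=2

#[allow(dead_code)]
#[repr(i32)]
enum AfterRefactor { Active, Pending, Closed }  // Active=0, Pending=1 -- values shifted!

// Explicit discriminants pin the database mapping regardless of order:
#[allow(dead_code)]
#[repr(i32)]
enum Explicit { Active = 1, Pending = 0, Closed = 2 }
```

A row written as `Active` before the refactor (stored as `1`) would decode as `Pending` afterward; explicit discriminants make such a reorder a no-op.
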
### Composite types
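The source elides the example here, but the wire format the derive must produce is PostgreSQL's standard composite binary layout: an i32 field count, then for each field a u32 element OID, an i32 byte length (-1 for NULL), and the raw field bytes. A minimal sketch of that encoding, independent of resolute:

```rust
// PostgreSQL composite (record) binary layout:
//   i32 field count, then per field:
//   u32 element type OID, i32 data length (-1 for NULL), raw bytes.
// Integers are big-endian on the wire.
fn encode_composite(fields: &[(u32, Option<Vec<u8>>)]) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&(fields.len() as i32).to_be_bytes());
    for (oid, data) in fields {
        buf.extend_from_slice(&oid.to_be_bytes());
        match data {
            Some(bytes) => {
                buf.extend_from_slice(&(bytes.len() as i32).to_be_bytes());
                buf.extend_from_slice(bytes);
            }
            // NULL field: length -1, no data bytes.
            None => buf.extend_from_slice(&(-1i32).to_be_bytes()),
        }
    }
    buf
}
```

For example, a two-field composite of an `int4` (OID 23) and a `text` (OID 25) produces a 26-byte buffer: 4 bytes of count, 12 for the integer field, 10 for the two-character text field.
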
### Domain types (newtypes)

Transparent wrappers over base PostgreSQL types. All encoding/decoding delegates to the inner type:

```rust
#[derive(PgDomain)]
struct Email(String);

#[derive(PgDomain)]
struct UserId(i64);
```

Domain types automatically inherit the array OID from their inner type, so PostgreSQL knows how to handle them in array context:

```rust
use resolute::PgType;

// Email wraps String (text) → ARRAY_OID = 1009 (text[])
assert_eq!(Email::ARRAY_OID, 1009);

// UserId wraps i64 (int8) → ARRAY_OID = 1016 (int8[])
assert_eq!(UserId::ARRAY_OID, 1016);
```
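The inheritance mechanism is plain associated-const delegation, which can be sketched without the crate (`PgType` here is a stand-in for resolute's trait; the OIDs are PostgreSQL's real catalog values):

```rust
// Stand-in for a PgType-style trait with associated OID constants:
trait PgType {
    const OID: u32;
    const ARRAY_OID: u32;
}
impl PgType for String { const OID: u32 = 25; const ARRAY_OID: u32 = 1009; } // text / text[]
impl PgType for i64    { const OID: u32 = 20; const ARRAY_OID: u32 = 1016; } // int8 / int8[]

#[allow(dead_code)]
struct Email(String);
#[allow(dead_code)]
struct UserId(i64);

// A PgDomain-style derive can simply forward both consts to the inner type:
impl PgType for Email {
    const OID: u32 = <String as PgType>::OID;
    const ARRAY_OID: u32 = <String as PgType>::ARRAY_OID;
}
impl PgType for UserId {
    const OID: u32 = <i64 as PgType>::OID;
    const ARRAY_OID: u32 = <i64 as PgType>::ARRAY_OID;
}
```

Because the consts are resolved at compile time, the newtype costs nothing at runtime and arrays of the domain type reuse the inner type's `text[]`/`int8[]` OIDs automatically.
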
## FromRow derive

Basic usage with rename and nullable fields (attribute names are illustrative):

```rust
#[derive(FromRow)]
struct User {
    id: i64,
    #[from_row(rename = "full_name")]
    name: String,
    email: Option<String>, // nullable column → Option<T>
}
```
### FromRow attributes

- `skip`: ignore the field; use `Default::default()`
- `default`: fall back to `Default::default()` if the column is missing or NULL
- `json`: deserialize a JSON/JSONB column via serde
- `try_from`: decode as one type, convert via `TryFrom`:

```rust
#[derive(FromRow)]
struct Account {
    // Decoded from the database as i64, then converted via TryFrom
    // (attribute shape is illustrative):
    #[from_row(try_from = "i64")]
    flags: u32,
}
```

- `flatten`: embed a nested FromRow struct

`flatten` shares the same row. The nested struct's column names must not conflict with the outer struct's columns.
## Array types

All types with `Encode + Decode` support generic `Vec<T>` arrays:

```rust
let tags: Vec<String> = vec!["rust".to_string(), "postgres".to_string()];
let rows = client.query("SELECT $1::text[] AS tags", &[&tags]).await?;

// Row-access API shape is illustrative:
let result: Vec<String> = rows[0].get("tags")?;
```
Supported: Vec<bool>, Vec<i16>, Vec<i32>, Vec<i64>, Vec<f32>, Vec<f64>, Vec<String>, Vec<uuid::Uuid>, Vec<chrono::NaiveDate>, Vec<chrono::NaiveTime>, Vec<chrono::NaiveDateTime>, Vec<chrono::DateTime<Utc>>, Vec<serde_json::Value>, Vec<PgNumeric>, Vec<PgInet>.
## Connection pool

```rust
// Constructor shape is illustrative:
let pool = ExclusivePool::connect("postgres://localhost/mydb").await?;
let client = pool.get().await?;
let rows = client.query("SELECT id, name FROM users", &[]).await?;

// Named params work through the pool too:
let user_id: i32 = 1;
let rows = client.query_named("SELECT id FROM users WHERE id = :id", &[("id", &user_id)]).await?;
```
### Pool lifecycle hooks

Customize pool behavior with lifecycle hooks. Connection-aware hooks receive a `&C` reference:

```rust
use resolute::LifecycleHooks;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

let checkout_count = Arc::new(AtomicUsize::new(0));
let cc = checkout_count.clone();
let release_count = Arc::new(AtomicUsize::new(0));
let rc = release_count.clone();

// Hook registration shape is illustrative:
let hooks = LifecycleHooks {
    on_checkout: Some(Box::new(move |_conn| { cc.fetch_add(1, Ordering::Relaxed); })),
    after_release: Some(Box::new(move || { rc.fetch_add(1, Ordering::Relaxed); })),
    ..Default::default()
};

let pool = ExclusivePool::new("postgres://localhost/mydb", hooks).await?; // illustrative
```
| Hook | Parameter | When |
|---|---|---|
| `before_acquire` | none | Before checkout starts |
| `on_create` | `&C` | After a new connection is created |
| `on_checkout` | `&C` | When a connection is handed to the caller |
| `on_checkin` | `&C` | When a connection passes health checks on return |
| `after_release` | none | After a connection is fully released (all paths) |
| `on_destroy` | none | When a connection is destroyed (expired/invalid/drain) |
## Streaming queries

Process large result sets row-by-row without buffering:

```rust
use futures::StreamExt; // assumed futures/futures-util dependency

let mut stream = client.query_stream("SELECT id, name FROM big_table", &[]).await?;
while let Some(row) = stream.next().await {
    let row = row?;
    // process row...
}
```
## Timeouts and cancellation

```rust
use std::time::Duration;

// Auto-cancel via CancelRequest if timeout exceeded (argument order illustrative):
let rows = client.query_timeout("SELECT pg_sleep(10)", &[], Duration::from_secs(2)).await;

// Manual cancellation from another task:
let token = client.cancel_token();
tokio::spawn(async move {
    token.cancel().await;
});
```
## Pipelining

Batch multiple queries in one network round-trip:

```rust
let results = client.pipeline()
    .query("SELECT id FROM users", &[])
    .execute("UPDATE users SET active = true WHERE id = $1", &[&1])
    .query("SELECT count(*) FROM users", &[])
    .run()
    .await?;
```
## Bulk data loading (COPY)

```rust
// COPY IN: bulk import from CSV (illustrative table):
let csv = b"1,Alice\n2,Bob\n";
let count = client.copy_in("COPY users (id, name) FROM STDIN WITH (FORMAT csv)", csv).await?;

// COPY OUT: bulk export
let data = client.copy_out("COPY users TO STDOUT WITH (FORMAT csv)").await?;
```
## Auto-reconnecting client

```rust
use resolute::ReconnectingClient;

let client = ReconnectingClient::new("postgres://localhost/mydb").await?; // illustrative URL
// Queries auto-reconnect if the connection drops:
let rows = client.query("SELECT 1", &[]).await?;
```
## Retry policy

```rust
use resolute::RetryPolicy;
use std::time::Duration;

// Constructor arguments are illustrative (max attempts, base backoff):
let policy = RetryPolicy::new(3, Duration::from_millis(100));
let rows = policy.execute(|| client.query("SELECT 1", &[])).await?;
```
## Infinity handling

PostgreSQL supports `'infinity'` and `'-infinity'` for dates and timestamps. Use `PgTimestamp` and `PgDate` instead of chrono types when your data may contain these:

```rust
let rows = client.query("SELECT 'infinity'::timestamp AS ts", &[]).await?;

// Row access and variant name are illustrative:
let ts: PgTimestamp = rows[0].get("ts")?;
assert_eq!(ts, PgTimestamp::Infinity);
```
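On the wire, a PostgreSQL binary timestamp is an i64 count of microseconds since 2000-01-01, with `i64::MAX` meaning `infinity` and `i64::MIN` meaning `-infinity`. A `PgTimestamp`-like type can surface those sentinels as variants; this is a sketch under that assumption, not resolute's actual definition:

```rust
// Sketch of an infinity-aware timestamp over PostgreSQL's binary encoding.
// The sentinel values i64::MAX / i64::MIN are PostgreSQL's 'infinity' and
// '-infinity'; everything else is microseconds since the 2000-01-01 epoch.
#[derive(Debug, PartialEq)]
enum Timestamp {
    NegInfinity,
    Micros(i64), // microseconds since 2000-01-01 00:00:00 UTC
    Infinity,
}

fn decode_timestamp(raw: i64) -> Timestamp {
    match raw {
        i64::MAX => Timestamp::Infinity,
        i64::MIN => Timestamp::NegInfinity,
        us => Timestamp::Micros(us),
    }
}
```

A chrono `NaiveDateTime` has no representation for the sentinels, which is why a dedicated type is needed when the data may contain them.
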
## Pool warm-up and metrics

```rust
let pool = ExclusivePool::connect("postgres://localhost/mydb").await?; // illustrative
pool.warm_up(5).await; // pre-create 5 connections

// Application metrics (Prometheus format); accessor is illustrative:
let output = pool.metrics().gather();
```
## Test helper

```rust
use resolute::TestDb;

let db = TestDb::create().await?;
let client = db.client().await?;
// ... run tests ...
db.drop_db().await?;
```

Or use the attribute macro. The macro creates and drops the temp database and binds `client: resolute::Client` in scope. Write the test body as if `client` were a free variable; the macro injects it.

```rust
// Macro name is illustrative:
#[resolute::test]
async fn inserts_a_user() {
    client.query("INSERT INTO users (name) VALUES ($1)", &[&"alice"]).await.unwrap();
}
```
## Offline builds

```sh
# Populate cache from source files (run with DB available):
resolute-cli prepare

# Build without DB (CI/Docker):
RESOLUTE_OFFLINE=true cargo build

# Verify cache is up to date:
# (verification command elided in source)
```
## Migrations

```sh
# CLI (subcommand shape is illustrative; available: create, run, revert, status, info, validate, seed):
resolute-cli migrate run
```

Or embed in your application:

```rust
// Embedded runner; call shape is illustrative:
resolute::migrations::run(&client).await?;
```
## Feature flags

| Feature | Default | Enables |
|---|---|---|
| `chrono` | yes | `NaiveDate`, `NaiveTime`, `NaiveDateTime`, `DateTime<Utc>` |
| `json` | yes | `serde_json::Value` for JSON/JSONB |
| `uuid` | yes | `uuid::Uuid` |
## Design decisions
PostgreSQL only. Resolute does not have an Any database abstraction or multi-database support. It is built from the ground up for PostgreSQL: the wire protocol, type system, OID mappings, and query semantics are all PostgreSQL-specific. This is intentional: a single-database library can leverage PostgreSQL features fully (range types, advisory locks, LISTEN/NOTIFY, custom enums, composite types, binary protocol) without lowest-common-denominator abstractions.
Explicit integer enum discriminants. Integer-backed enums require = N on every variant. This prevents silent breakage when variants are reordered or inserted.
OID = 0 for custom types by default. `PgEnum`, `PgComposite`, and `PgDomain` default to OID = 0 (Unspecified), letting PostgreSQL infer the type from context (column type, cast, etc.). For better error messages or explicit type identity, you can provide OIDs via `#[pg_type(oid = N, array_oid = N)]`:

```rust
#[derive(PgDomain)]
#[pg_type(oid = 123456)] // your type's OID; array_oid still inherited from String if not specified
struct Email(String);
```
You can discover your custom type OIDs at runtime with client.lookup_type_oids("mood").
Non-consuming Executor. The Executor trait uses &self instead of consuming self. This is a deliberate departure from sqlx, enabling natural multi-query reuse in generic functions without lifetime gymnastics.
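The difference is easy to model with a toy trait (stand-in types, not resolute's API): because the method takes `&self`, one generic function can issue several queries on the same executor, which a consuming `self` signature forbids.

```rust
// Toy model of a non-consuming executor: &self methods allow repeated use.
trait Executor {
    fn run(&self, sql: &str) -> String;
}

struct Client;
struct Transaction;

impl Executor for Client {
    fn run(&self, sql: &str) -> String { format!("client ran: {sql}") }
}
impl Executor for Transaction {
    fn run(&self, sql: &str) -> String { format!("txn ran: {sql}") }
}

// Two queries on the same `db` -- impossible if run() consumed self:
fn create_user(db: &impl Executor) -> (String, String) {
    (db.run("INSERT INTO users ..."), db.run("INSERT INTO audit_log ..."))
}
```

The same `create_user` works unchanged with `&Client` and `&Transaction`, mirroring the generic-function story the section above describes.
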
## Architecture
See ARCHITECTURE.md for the internals: the Executor trait and its implementors, how atomic() dispatches BEGIN vs SAVEPOINT via monomorphisation, how the FromRow derive expands, the string-vs-integer PgEnum split, composite wire format, PgDomain array OID inheritance, and the ReconnectingClient lock-free-read path.