# rdbi
A Rust database interface built on mysql_async with derive macros for easy row mapping.
## Installation
Check the latest versions on crates.io: `rdbi`, `rdbi-codegen`
```toml
[dependencies]
rdbi = "0.1"
```
That's it for most users. For TLS connections (required by most cloud database providers), enable a TLS feature:
```toml
[dependencies]
rdbi = { version = "0.1", features = ["rustls-tls"] }
```
| Feature | Backend | Notes |
|---|---|---|
| `rustls-tls` | Rustls (pure Rust, recommended) | No system dependencies, works everywhere |
| `native-tls` | OS native (OpenSSL/Secure Transport/SChannel) | Use when you need the OS certificate store |
If you want automatic code generation from SQL schemas, also add:
```toml
[build-dependencies]
rdbi-codegen = "0.1"
```
## Quick Start

### Manual Usage (No Code Generation)
```rust
use rdbi::{FromRow, MySqlPool};

// Define your struct with FromRow derive
#[derive(FromRow)]
struct User {
    id: i64,
    username: String,
    email: String,
}

// Sketch only: the raw query call below is illustrative, not confirmed by this README
async fn list_users(pool: &MySqlPool) -> Result<Vec<User>, rdbi::Error> {
    pool.query("SELECT id, username, email FROM users").await
}
```
### With Code Generation (Recommended for Large Schemas)
Generate structs and DAO methods automatically from your SQL schema.
1. Add schema file (`schema.sql`):

```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL,
    status ENUM('ACTIVE', 'INACTIVE') NOT NULL DEFAULT 'ACTIVE',
    INDEX idx_status (status)
);
```
2. Configure (`Cargo.toml`):

```toml
# Section and key names below are reconstructed; check the rdbi-codegen docs for exact spelling
[package.metadata.rdbi]
schema = "schema.sql"
output_model_dir = "src/generated/models"
output_dao_dir = "src/generated/dao"

[dependencies]
rdbi = "0.1"

[build-dependencies]
rdbi-codegen = "0.1"
```
3. Add build script (`build.rs`):

```rust
// Sketch only: the rdbi-codegen build entry point is assumed, not documented here
fn main() {
    println!("cargo:rerun-if-changed=schema.sql");
    rdbi_codegen::generate().expect("rdbi codegen failed"); // hypothetical API
}
```
4. Include generated code (`src/main.rs`):

```rust
// Paths are illustrative; they follow the output dirs configured above
mod generated;
use generated::models::*;
use rdbi::MySqlPool;

async fn run() -> Result<(), rdbi::Error> {
    let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb")?;
    // Call the generated DAO methods (see "Generated DAO Methods" below)
    Ok(())
}
```
Note: The generated code under `src/generated/` should be committed to version control. This ensures IDE support works without building, and changes are reviewable in PRs. Run `cargo build` to regenerate after schema changes.
If you prefer not to commit generated code, omit the `output_*_dir` settings. The defaults write to `$OUT_DIR`, and you use `include!()`:

```rust
// Path under $OUT_DIR is illustrative
include!(concat!(env!("OUT_DIR"), "/models/mod.rs"));
```
## Connection Pool

`MySqlPool` implements `Clone`: cloning is cheap (Arc-backed) and all clones share the same underlying connection pool. No need to wrap in `Arc`.
```rust
use rdbi::MySqlPool;

// Default pool: min=10, max=100 connections
let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb")?;

// Custom pool size via builder (argument shapes are illustrative)
let pool = MySqlPool::builder("mysql://user:pass@localhost:3306/mydb")
    .pool_min(5)
    .pool_max(50)
    .build()?;

// Or via URL parameters (parameter names assumed to mirror the builder)
let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb?pool_min=5&pool_max=50")?;

// Clone is cheap; share across services
let pool2 = pool.clone();
```
### Builder Options

| Method | Default | Description |
|---|---|---|
| `pool_min(n)` | 10 | Minimum idle connections |
| `pool_max(n)` | 100 | Maximum total connections |
| `inactive_connection_ttl(d)` | 0s | TTL for idle connections above `pool_min` |
| `abs_conn_ttl(d)` | None | Absolute TTL for any connection |
## Generated DAO Methods

### Basic Methods (Always Generated)

| Method | Return | Description |
|---|---|---|
| `find_all` | `Vec<T>` | Fetch all records |
| `count_all` | `i64` | Count total records |
| `stream_all` | `Vec<T>` | Fetch all (batch-friendly alias) |
### Primary Key Methods

| Method | Return | Description |
|---|---|---|
| `find_by_<pk>` | `Option<T>` | Find by primary key |
| `delete_by_<pk>` | `u64` | Delete by primary key |

Composite PKs generate combined names: `find_by_user_id_and_role_id(user_id, role_id)`
### Insert Methods

| Method | Return | Description |
|---|---|---|
| `insert` | `u64` | Insert entity, returns `last_insert_id` |
| `insert_plain` | `u64` | Insert with individual parameters |
| `insert_all` | `u64` | Batch insert, returns `rows_affected` |
### Update/Upsert Methods

| Method | Return | When Generated |
|---|---|---|
| `update` | `u64` | Table has PK + non-PK columns |
| `update_by_<pk>` | `u64` | Same, with individual parameters |
| `upsert` | `u64` | Table has PK or unique index |
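On MySQL, an upsert of this kind conventionally compiles to an `INSERT ... ON DUPLICATE KEY UPDATE` statement. The exact SQL rdbi emits is not shown in this README, but the shape is roughly:

```sql
INSERT INTO users (id, username, email, status)
VALUES (?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
    username = VALUES(username),
    email    = VALUES(email),
    status   = VALUES(status);
```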
### Index-Aware Query Methods

Methods are generated based on index type (deduplicated by priority):

| Priority | Index Type | Return | Example |
|---|---|---|---|
| 1 | Primary Key | `Option<T>` | `find_by_id(id)` |
| 2 | Unique Index | `Option<T>` | `find_by_email(email)` |
| 3 | Non-Unique Index | `Vec<T>` | `find_by_status(status)` |
| 4 | Foreign Key | `Vec<T>` | `find_by_user_id(user_id)` |
Composite indexes: `find_by_user_id_and_device_type(user_id, device_type)`
### Bulk Query Methods (IN Clause)

For single-column indexes, pluralized bulk methods are generated:

```rust
// e.g. find_by_id(id) gets a pluralized companion (argument type illustrative)
find_by_ids(&[1, 2, 3])
```
### Composite Enum List Methods

For composite indexes with trailing enum columns:

```rust
// Index on (user_id, device_type) where device_type is ENUM
// (enum variant names are illustrative)
find_by_user_id_and_device_types(user_id, &[DeviceType::Ios, DeviceType::Android])
```
### Pagination Methods

| Method | Return | Description |
|---|---|---|
| `find_all_paginated` | `Vec<T>` | Paginated query with sorting |
| `get_paginated_result` | `PaginatedResult<T>` | Includes total count, pages, has_next |

Generated helper types: `SortDirection`, `{Table}SortBy`, `PaginatedResult<T>`
## Custom Queries

Extend generated DAOs or write standalone queries:

```rust
// Add a custom method to the generated DAO (sketch; signatures assumed)
impl UserDao {
    pub async fn count_active(&self, pool: &MySqlPool) -> Result<i64, rdbi::Error> {
        // The raw-query call below is illustrative
        pool.query_scalar("SELECT COUNT(*) FROM users WHERE status = 'ACTIVE'").await
    }
}
```
## Transactions

rdbi provides three convenience macros for transactional database operations. No trait imports are needed; just use the macros directly:

| Macro | Default Isolation | Description |
|---|---|---|
| `in_transaction!(pool, \|tx\| { ... })` | Serializable | Auto-commit on `Ok`, auto-rollback on `Err` |
| `in_transaction_with!(pool, level, \|tx\| { ... })` | Caller-specified | Same, with explicit isolation level |
| `with_connection!(pool, \|conn\| { ... })` | N/A | No transaction; each statement auto-commits |
### Basic Usage

```rust
use rdbi::IsolationLevel;

// DAO names and calls in this sketch are illustrative

// Auto-commit on Ok, auto-rollback on Err (default: Serializable isolation)
let order_id = in_transaction!(pool, |tx| {
    let id = order_dao.insert(tx, &order).await?;
    item_dao.insert_all(tx, &items).await?;
    Ok(id)
}).await?;

// With explicit isolation level
in_transaction_with!(pool, IsolationLevel::ReadCommitted, |tx| {
    user_dao.update(tx, &user).await?;
    Ok(())
}).await?;

// Without transaction: each statement auto-commits independently
with_connection!(pool, |conn| {
    user_dao.insert(conn, &user).await?;
    Ok(())
}).await?;
```
### Generic Error Support

The macros accept any error type `E` where `E: From<rdbi::Error>`. This means `anyhow::Error`, custom error enums, or plain `rdbi::Error` all work with `?` naturally:

```rust
// Service layer using anyhow: rdbi errors auto-convert, non-DB errors work too
// (DAO and helper names are illustrative)
async fn place_order(pool: &MySqlPool, order: &Order) -> anyhow::Result<i64> {
    let id = in_transaction!(pool, |tx| {
        let id = order_dao.insert(tx, order).await?; // rdbi::Error -> anyhow::Error via From
        check_inventory(order)?;                     // any other anyhow error also rolls back
        Ok(id)
    }).await?;
    Ok(id)
}

// Any code that fails inside the body triggers a rollback.
// Put external calls (HTTP, etc.) inside only when they must succeed atomically
// with the DB writes. Otherwise, call them after the transaction commits.
```
### Explicit Error Type

When the error type can't be inferred (e.g., the body only contains rdbi operations and returns `Ok(())`), you can specify it explicitly as the second argument:

```rust
// Without an explicit type, `Ok(())` is ambiguous: both rdbi::Error and anyhow::Error match
// (DAO calls are illustrative)
in_transaction!(pool, rdbi::Error, |tx| {
    audit_dao.insert(tx, &entry).await?;
    Ok(())
}).await?;

// Also works with in_transaction_with! (argument order assumed)
in_transaction_with!(pool, IsolationLevel::ReadCommitted, rdbi::Error, |tx| {
    audit_dao.insert(tx, &entry).await?;
    Ok(())
}).await?;
```
### Non-'static References

Unlike the `Transactional` trait methods (which require `'static` closures), the macros use inline async blocks. This means captured `&str` and other non-`'static` references work without cloning:

```rust
// `name` is a borrowed &str captured directly by the macro body (DAO call illustrative)
async fn rename_user(pool: &MySqlPool, id: i64, name: &str) -> Result<(), rdbi::Error> {
    in_transaction!(pool, |tx| {
        user_dao.update_name(tx, id, name).await?;
        Ok(())
    }).await
}
```
### Manual Transaction Control

For cases where you need explicit begin/commit/rollback:

```rust
use rdbi::Transactional;

let tx = pool.begin().await?;
user_dao.insert(&tx, &user).await?;   // DAO calls illustrative
order_dao.insert(&tx, &order).await?;
tx.commit().await?; // or tx.rollback().await?
```
Isolation Levels: `ReadUncommitted`, `ReadCommitted`, `RepeatableRead`, `Serializable` (default)
## Derive Attributes

## Type Mapping
| MySQL | Rust |
|---|---|
| BIGINT | i64 |
| INT | i32 |
| VARCHAR, TEXT | String |
| BOOLEAN, TINYINT(1) | bool |
| DECIMAL | rust_decimal::Decimal |
| DATETIME, TIMESTAMP | chrono::NaiveDateTime |
| DATE | chrono::NaiveDate |
| TIME | chrono::NaiveTime |
| ENUM | Generated enum |
| BLOB, BINARY | Vec<u8> |
| JSON | serde_json::Value |
Nullable columns → Option<T>
## CLI Usage

```sh
# Subcommands and flags below are illustrative; run `rdbi-codegen --help` for the real ones

# Install
cargo install rdbi-codegen

# Generate code
rdbi-codegen generate

# Preview without writing
rdbi-codegen generate --dry-run

# Inspect parsed schema
rdbi-codegen inspect
```
## Configuration Options

For `build.rs` via `Cargo.toml`:

```toml
# Section and key names reconstructed from context; check the rdbi-codegen docs
[package.metadata.rdbi]
schema = "schema.sql"
output_model_dir = "src/generated/models" # Default: $OUT_DIR/models
output_dao_dir = "src/generated/dao"      # Default: $OUT_DIR/dao
include_tables = ["users", "orders"]      # Only these tables
exclude_tables = ["migrations"]           # Skip these tables
# ...plus additional boolean options (key names elided here)
```

Or create `rdbi-codegen.toml` for CLI usage.
## Contributing

### Commit Messages

This project uses conventional commits with release-please for automated releases:

- `feat: add connection pooling` - new feature (bumps minor version)
- `fix: handle timeout correctly` - bug fix (bumps patch version)
- `feat!: redesign Pool trait` - breaking change (bumps major version)
- `chore:`, `docs:`, `refactor:` - no version bump
### Setup
## License
Licensed under the Apache License, Version 2.0 — see LICENSE for details.
This project also supports the Anti-996 License. We encourage fair labor practices and oppose the "996" working schedule.