# rdbi

A Rust database interface built on `mysql_async`, with derive macros for easy row mapping.
## Installation

Check the latest versions on crates.io: `rdbi`, `rdbi-codegen`.

```toml
[dependencies]
rdbi = "0.1"
```

That's it for most users. If you want automatic code generation from SQL schemas, also add:

```toml
[build-dependencies]
rdbi-codegen = "0.1"
```
## Quick Start

### Manual Usage (No Code Generation)

A minimal sketch (struct fields and connection details are illustrative):

```rust
use rdbi::{FromRow, MySqlPool};

// Define your struct with FromRow derive
#[derive(FromRow)]
struct User {
    id: i64,
    username: String,
    email: String,
}

async fn run() -> Result<(), Box<dyn std::error::Error>> {
    let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb")?;
    // query with `pool` here (see the sections below for the API)
    Ok(())
}
```
### With Code Generation (Recommended for Large Schemas)

Generate structs and DAO methods automatically from your SQL schema.

1. Add a schema file (`schema.sql`):

```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL,
    status ENUM('ACTIVE', 'INACTIVE') NOT NULL DEFAULT 'ACTIVE',
    INDEX idx_status (status)
);
```
2. Configure `Cargo.toml`:

```toml
# Section and key names here are reconstructed from context
# (output_*_dir and models_module are referenced later in this README).
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_models_dir = "src/generated/models"
output_dao_dir = "src/generated/dao"
models_module = "generated::models"

[dependencies]
rdbi = "0.1"

[build-dependencies]
rdbi-codegen = "0.1"
```
3. Add a build script (`build.rs`):
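A minimal `build.rs` might look like the following sketch. The `rdbi_codegen::generate` entry point is an assumption (the crate's real build API may differ); the `rerun-if-changed` line is standard Cargo.

```rust
// build.rs — sketch only; the rdbi_codegen function name is an assumption.
fn main() {
    // Re-run codegen whenever the schema changes.
    println!("cargo:rerun-if-changed=schema.sql");

    // Assumed entry point that reads the [package.metadata.*] config
    // from Cargo.toml and writes the models/DAO modules.
    rdbi_codegen::generate().expect("rdbi codegen failed");
}
```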
4. Include the generated code (`src/main.rs`):

```rust
// Module layout is a sketch; match it to your output_*_dir settings.
mod generated;

use generated::models::*;
use rdbi::MySqlPool;

async fn run() -> Result<(), Box<dyn std::error::Error>> {
    let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb")?;
    // call the generated DAO methods with `pool` here
    Ok(())
}
```
Note: The generated code under `src/generated/` should be committed to version control. This ensures IDE support works without building, and changes are reviewable in PRs. Run `cargo build` to regenerate after schema changes.

If you prefer not to commit generated code, omit the `output_*_dir` and `models_module` settings. The defaults write to `$OUT_DIR`, and you include the code with `include!()`:

```rust
// $OUT_DIR paths are illustrative; match your generator's output layout.
pub mod models {
    include!(concat!(env!("OUT_DIR"), "/models/mod.rs"));
}
pub use models::*;
```
## Connection Pool

`MySqlPool` implements `Clone` — cloning is cheap (Arc-backed) and all clones share the same underlying connection pool. No need to wrap it in `Arc`.

```rust
use rdbi::MySqlPool;

// (connection URLs and builder argument shapes are illustrative)

// Default pool: min=10, max=100 connections
let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb")?;

// Custom pool size via builder
let pool = MySqlPool::builder("mysql://user:pass@localhost:3306/mydb")
    .pool_min(5)
    .pool_max(50)
    .build()?;

// Or via URL parameters
let pool = MySqlPool::new("mysql://user:pass@localhost:3306/mydb?pool_min=5&pool_max=50")?;

// Clone is cheap — share across services
let pool2 = pool.clone();
```
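The cheap-clone behavior is the same pattern as wrapping shared state in `Arc` yourself — a sketch with plain `std` types (a stand-in, not rdbi's internals):

```rust
use std::sync::{Arc, Mutex};

// Stand-in for an Arc-backed pool handle; not rdbi's actual types.
#[derive(Clone)]
pub struct SharedPool {
    inner: Arc<Mutex<Vec<u32>>>, // pretend these are idle connection ids
}

impl SharedPool {
    pub fn new(ids: Vec<u32>) -> Self {
        SharedPool { inner: Arc::new(Mutex::new(ids)) }
    }

    // Take an idle "connection" out of the shared pool.
    pub fn take(&self) -> Option<u32> {
        self.inner.lock().unwrap().pop()
    }

    pub fn idle(&self) -> usize {
        self.inner.lock().unwrap().len()
    }
}
```

Cloning `SharedPool` copies only the `Arc`, so every clone drains the same set of ids — which is why an Arc-backed pool needs no extra `Arc` wrapper.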
### Builder Options

| Method | Default | Description |
|---|---|---|
| `pool_min(n)` | 10 | Minimum idle connections |
| `pool_max(n)` | 100 | Maximum total connections |
| `inactive_connection_ttl(d)` | 0s | TTL for idle connections above `pool_min` |
| `abs_conn_ttl(d)` | `None` | Absolute TTL for any connection |
## Generated DAO Methods

### Basic Methods (Always Generated)

| Method | Return | Description |
|---|---|---|
| `find_all` | `Vec<T>` | Fetch all records |
| `count_all` | `i64` | Count total records |
| `stream_all` | `Vec<T>` | Fetch all (batch-friendly alias) |
### Primary Key Methods

| Method | Return | Description |
|---|---|---|
| `find_by_<pk>` | `Option<T>` | Find by primary key |
| `delete_by_<pk>` | `u64` | Delete by primary key |

Composite PKs generate combined names: `find_by_user_id_and_role_id(user_id, role_id)`.
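The combined-name convention is mechanical — a sketch of how such names derive from a key's column list (illustrative, not rdbi's actual generator code):

```rust
// Build a finder name from primary-key / index column names,
// mirroring the `find_by_a_and_b` convention described above.
fn finder_name(columns: &[&str]) -> String {
    format!("find_by_{}", columns.join("_and_"))
}
```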
### Insert Methods

| Method | Return | Description |
|---|---|---|
| `insert` | `u64` | Insert entity, returns `last_insert_id` |
| `insert_plain` | `u64` | Insert with individual parameters |
| `insert_all` | `u64` | Batch insert, returns `rows_affected` |
### Update/Upsert Methods

| Method | Return | When Generated |
|---|---|---|
| `update` | `u64` | Table has PK + non-PK columns |
| `update_by_<pk>` | `u64` | Same, with individual parameters |
| `upsert` | `u64` | Table has PK or unique index |
### Index-Aware Query Methods

Methods are generated based on index type (deduplicated by priority):

| Priority | Index Type | Return | Example |
|---|---|---|---|
| 1 | Primary Key | `Option<T>` | `find_by_id(id)` |
| 2 | Unique Index | `Option<T>` | `find_by_email(email)` |
| 3 | Non-Unique Index | `Vec<T>` | `find_by_status(status)` |
| 4 | Foreign Key | `Vec<T>` | `find_by_user_id(user_id)` |

Composite indexes: `find_by_user_id_and_device_type(user_id, device_type)`.
### Bulk Query Methods (IN Clause)

For single-column indexes, pluralized bulk methods are generated, e.g. `find_by_ids`.
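Under the hood, a bulk finder expands to a SQL `IN` list with one placeholder per bound value — roughly like this sketch (illustrative, not rdbi's actual SQL builder):

```rust
// Build `col IN (?, ?, …)` with one placeholder per bound value.
fn in_clause(column: &str, values: usize) -> String {
    let placeholders = vec!["?"; values].join(", ");
    format!("{} IN ({})", column, placeholders)
}
```

A real implementation would also need to special-case an empty value list, since `IN ()` is not valid SQL.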
### Composite Enum List Methods

For composite indexes with trailing enum columns:

```rust
// Index on (user_id, device_type) where device_type is ENUM
find_by_user_id_and_device_types(user_id, device_types)
```
### Pagination Methods

| Method | Return | Description |
|---|---|---|
| `find_all_paginated` | `Vec<T>` | Paginated query with sorting |
| `get_paginated_result` | `PaginatedResult<T>` | Includes total count, pages, `has_next` |

Generated helper types: `SortDirection`, `{Table}SortBy`, `PaginatedResult<T>`.
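The bookkeeping behind `PaginatedResult<T>` (total pages, `has_next`) is simple arithmetic — a sketch assuming zero-based page numbers, which may differ from rdbi's convention:

```rust
// Ceiling division: how many pages a result set occupies.
fn total_pages(total_rows: i64, page_size: i64) -> i64 {
    (total_rows + page_size - 1) / page_size
}

// Whether a zero-based `page` has a successor.
fn has_next(page: i64, total_rows: i64, page_size: i64) -> bool {
    page + 1 < total_pages(total_rows, page_size)
}
```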
## Custom Queries

Extend generated DAOs with custom methods, or write standalone queries.
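Because generated DAOs are plain structs in your own crate, one way to add a custom method is an inherent `impl` block in another module. A self-contained sketch of the pattern (`UserDao` here is a stand-in, not rdbi's generated type):

```rust
// Pattern sketch: extend a "generated" struct from elsewhere in the crate.
mod generated {
    pub struct UserDao;

    impl UserDao {
        // Stand-in for a generated query method.
        pub fn find_all(&self) -> Vec<&'static str> {
            vec!["alice", "bob"]
        }
    }
}

// Your own module adds a custom query to the generated DAO.
impl generated::UserDao {
    pub fn find_first(&self) -> Option<&'static str> {
        self.find_all().into_iter().next()
    }
}
```

This works because inherent `impl` blocks may live in any module of the crate that defines the type.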
## Transactions

Execute operations with a consistent callback-style API (the closure shapes below are illustrative):

```rust
use rdbi::{IsolationLevel, MySqlPool}; // import paths illustrative

// Without a transaction — each statement auto-commits
pool.with_connection(|conn| async move {
    // run statements on `conn`
    Ok(())
})
.await?;

// With a transaction — auto-commit on Ok, auto-rollback on Err
let order_id = pool.in_transaction(|tx| async move {
    // insert the order, return its id
    Ok(new_order_id)
})
.await?;

// With a custom isolation level
pool.in_transaction_with(IsolationLevel::ReadCommitted, |tx| async move {
    Ok(())
})
.await?;
```

For manual control:

```rust
let tx = pool.begin().await?;
// (exact DAO/transaction call shapes are illustrative)
user_dao.insert(&tx).await?;
order_dao.insert(&tx).await?;
tx.commit().await?; // or tx.rollback().await?
```

Isolation levels: `ReadUncommitted`, `ReadCommitted`, `RepeatableRead`, `Serializable` (default).
## Derive Attributes

## Type Mapping

| MySQL | Rust |
|---|---|
| BIGINT | `i64` |
| INT | `i32` |
| VARCHAR, TEXT | `String` |
| BOOLEAN, TINYINT(1) | `bool` |
| DECIMAL | `rust_decimal::Decimal` |
| DATETIME, TIMESTAMP | `chrono::NaiveDateTime` |
| DATE | `chrono::NaiveDate` |
| TIME | `chrono::NaiveTime` |
| ENUM | Generated enum |
| BLOB, BINARY | `Vec<u8>` |
| JSON | `serde_json::Value` |

Nullable columns → `Option<T>`
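For the `users` table from the Quick Start, these mappings would yield a model roughly like the following sketch (external crates such as chrono omitted; the names are illustrative, not the generator's exact output):

```rust
// ENUM('ACTIVE', 'INACTIVE') becomes a generated Rust enum.
#[derive(Debug, Clone, PartialEq)]
pub enum UserStatus {
    Active,
    Inactive,
}

// Model for the `users` table, following the mapping table above.
#[derive(Debug, Clone, PartialEq)]
pub struct User {
    pub id: i64,            // BIGINT
    pub username: String,   // VARCHAR(255)
    pub email: String,      // VARCHAR(255)
    pub status: UserStatus, // ENUM
}
```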
## CLI Usage

```sh
# Install
cargo install rdbi-codegen

# Generate code (subcommand and flag names below are illustrative —
# check the installed binary's help for the real interface)
rdbi-codegen generate

# Preview without writing
rdbi-codegen generate --dry-run

# Inspect parsed schema
rdbi-codegen inspect
```
## Configuration Options

For `build.rs` via `Cargo.toml`:

```toml
# Section and key names are reconstructed from context; check the crate's docs.
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_models_dir = "src/generated/models"   # Default: $OUT_DIR/models
output_dao_dir = "src/generated/dao"         # Default: $OUT_DIR/dao
models_module = "generated::models"          # Default: "models"
include_tables = ["users", "orders"]         # Only these tables
exclude_tables = ["migrations"]              # Skip these tables
# … = true
# … = true
```

Or create `rdbi-codegen.toml` for CLI usage.
## Contributing

### Commit Messages

This project uses conventional commits with release-please for automated releases:

- `feat: add connection pooling` — new feature (bumps minor version)
- `fix: handle timeout correctly` — bug fix (bumps patch version)
- `feat!: redesign Pool trait` — breaking change (bumps major version)
- `chore:`, `docs:`, `refactor:` — no version bump
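The version-bump rules above can be sketched as a small classifier (illustrative; release-please's real commit parsing is more thorough):

```rust
// Map a conventional-commit subject line to the release bump it triggers.
fn bump(subject: &str) -> &'static str {
    let prefix = subject.split(':').next().unwrap_or("");
    if prefix.ends_with('!') {
        "major" // breaking change, e.g. "feat!: …"
    } else if prefix.starts_with("feat") {
        "minor"
    } else if prefix.starts_with("fix") {
        "patch"
    } else {
        "none" // chore:, docs:, refactor:, …
    }
}
```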
### Setup
## License
Licensed under the Apache License, Version 2.0 — see LICENSE for details.
This project also supports the Anti-996 License. We encourage fair labor practices and oppose the "996" working schedule.