# rdbi

[![CI](https://github.com/a4501150/rdbi/actions/workflows/ci.yml/badge.svg)](https://github.com/a4501150/rdbi/actions/workflows/ci.yml)
[![rdbi](https://img.shields.io/crates/v/rdbi.svg?label=rdbi)](https://crates.io/crates/rdbi)
[![rdbi-codegen on crates.io](https://img.shields.io/crates/v/rdbi-codegen.svg?label=rdbi-codegen)](https://crates.io/crates/rdbi-codegen)
[![docs.rs](https://img.shields.io/docsrs/rdbi)](https://docs.rs/rdbi)
[![License](https://img.shields.io/crates/l/rdbi.svg)](LICENSE)

A Rust database interface built on `mysql_async` with derive macros for easy row mapping.

## Installation

Check the latest versions on crates.io: [rdbi](https://crates.io/crates/rdbi), [rdbi-codegen](https://crates.io/crates/rdbi-codegen)

```toml
[dependencies]
rdbi = "0.1"
```

That's it for most users. For **TLS connections** (required by most cloud database providers), enable a TLS feature:

```toml
[dependencies]
rdbi = { version = "0.1", features = ["native-tls"] }
# or
rdbi = { version = "0.1", features = ["rustls-tls"] }
```

| Feature | Backend | Notes |
|---------|---------|-------|
| `native-tls` | OS native (OpenSSL/Secure Transport/SChannel) | Requires system libraries |
| `rustls-tls` | Rustls (pure Rust) | No system dependencies |

If you want automatic code generation from SQL schemas, also add:

```toml
[build-dependencies]
rdbi-codegen = "0.1"
```

## Quick Start

### Manual Usage (No Code Generation)

```rust
use rdbi::{FromRow, Pool, Query, mysql::MySqlPool};

// Define your struct with FromRow derive
#[derive(FromRow)]
pub struct User {
    pub id: i64,
    pub username: String,
    pub email: String,
}

#[tokio::main]
async fn main() -> rdbi::Result<()> {
    // Connect to database
    let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

    // Query with type-safe binding
    let users: Vec<User> = Query::new("SELECT * FROM users WHERE id = ?")
        .bind(42)
        .fetch_all(&pool)
        .await?;

    Ok(())
}
```

### With Code Generation (Recommended for Large Schemas)

Generate structs and DAO methods automatically from your SQL schema.

**1. Add schema file** (`schema.sql`):
```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL,
    status ENUM('ACTIVE', 'INACTIVE') NOT NULL DEFAULT 'ACTIVE',
    INDEX idx_status (status)
);
```

**2. Configure** (`Cargo.toml`):
```toml
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_structs_dir = "src/generated/models"
output_dao_dir = "src/generated/dao"

[dependencies]
rdbi = "0.1"

[build-dependencies]
rdbi-codegen = "0.1"
```

**3. Add build script** (`build.rs`):
```rust
fn main() {
    rdbi_codegen::generate_from_cargo_metadata()
        .expect("Failed to generate code");
}
```

**4. Include generated code** (`src/main.rs`):
```rust
mod generated {
    pub mod models;
    pub mod dao;
}

use generated::models::*;
use rdbi::mysql::MySqlPool;

#[tokio::main]
async fn main() -> rdbi::Result<()> {
    let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

    // Use generated DAO methods
    let user = generated::dao::users::find_by_id(&pool, 1).await?;
    let active = generated::dao::users::find_by_status(&pool, UsersStatus::Active).await?;

    Ok(())
}
```

> **Note:** The generated code under `src/generated/` should be committed to version control. This ensures IDE support works without building, and changes are reviewable in PRs. Run `cargo build` to regenerate after schema changes.

<details>
<summary>Alternative: OUT_DIR with include!()</summary>

If you prefer not to commit generated code, omit the `output_*_dir` settings. The defaults write to `$OUT_DIR`, and you use `include!()`:

```rust
pub mod models {
    include!(concat!(env!("OUT_DIR"), "/models/mod.rs"));
}
pub mod dao {
    include!(concat!(env!("OUT_DIR"), "/dao/mod.rs"));
}
pub use models::*;
```

</details>

## Connection Pool

`MySqlPool` implements `Clone` — cloning is cheap (Arc-backed) and all clones share the same underlying connection pool. No need to wrap in `Arc`.

```rust
use rdbi::mysql::MySqlPool;

// Default pool: min=10, max=100 connections
let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

// Custom pool size via builder
let pool = MySqlPool::builder("mysql://user:pass@localhost/mydb")
    .pool_min(5)
    .pool_max(50)
    .build()?;

// Or via URL parameters
let pool = MySqlPool::new("mysql://user:pass@localhost/mydb?pool_min=5&pool_max=50")?;

// Clone is cheap — share across services
let pool2 = pool.clone();
```

### Builder Options

| Method | Default | Description |
|--------|---------|-------------|
| `pool_min(n)` | 10 | Minimum idle connections |
| `pool_max(n)` | 100 | Maximum total connections |
| `inactive_connection_ttl(d)` | 0s | TTL for idle connections above `pool_min` |
| `abs_conn_ttl(d)` | None | Absolute TTL for any connection |

## Generated DAO Methods

### Basic Methods (Always Generated)

| Method | Return | Description |
|--------|--------|-------------|
| `find_all` | `Vec<T>` | Fetch all records |
| `count_all` | `i64` | Count total records |
| `stream_all` | `Vec<T>` | Fetch all (batch-friendly alias) |

### Primary Key Methods

| Method | Return | Description |
|--------|--------|-------------|
| `find_by_<pk>` | `Option<T>` | Find by primary key |
| `delete_by_<pk>` | `u64` | Delete by primary key |

Composite PKs generate combined names: `find_by_user_id_and_role_id(user_id, role_id)`
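The naming scheme is mechanical: column names joined with `_and_`. A minimal sketch of how such method names are derived (hypothetical helper for illustration; the real generator lives in `rdbi-codegen`):

```rust
// Hypothetical sketch: derive a DAO method name from a method prefix
// and the columns of a (possibly composite) key.
fn method_name(prefix: &str, cols: &[&str]) -> String {
    // Columns are joined with "_and_", matching the generated API.
    format!("{}_by_{}", prefix, cols.join("_and_"))
}

fn main() {
    assert_eq!(method_name("find", &["id"]), "find_by_id");
    assert_eq!(
        method_name("find", &["user_id", "role_id"]),
        "find_by_user_id_and_role_id"
    );
    println!("{}", method_name("delete", &["user_id", "role_id"]));
}
```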

### Insert Methods

| Method | Return | Description |
|--------|--------|-------------|
| `insert` | `u64` | Insert entity, returns `last_insert_id` |
| `insert_plain` | `u64` | Insert with individual parameters |
| `insert_all` | `u64` | Batch insert, returns `rows_affected` |

### Update/Upsert Methods

| Method | Return | When Generated |
|--------|--------|----------------|
| `update` | `u64` | Table has PK + non-PK columns |
| `update_by_<pk>` | `u64` | Same, with individual parameters |
| `upsert` | `u64` | Table has PK or unique index |

### Index-Aware Query Methods

Methods are generated based on index type (deduplicated by priority):

| Priority | Index Type | Return | Example |
|----------|------------|--------|---------|
| 1 | Primary Key | `Option<T>` | `find_by_id(id)` |
| 2 | Unique Index | `Option<T>` | `find_by_email(email)` |
| 3 | Non-Unique Index | `Vec<T>` | `find_by_status(status)` |
| 4 | Foreign Key | `Vec<T>` | `find_by_user_id(user_id)` |

Composite indexes: `find_by_user_id_and_device_type(user_id, device_type)`

### Bulk Query Methods (IN Clause)

For single-column indexes, pluralized bulk methods are generated:

```rust
find_by_ids(&[i64]) -> Vec<T>
find_by_statuses(&[Status]) -> Vec<T>
```
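Conceptually, a bulk method expands the slice into an `IN (...)` clause with one placeholder per element. A minimal sketch of that expansion (hypothetical helper, not the crate's actual internals):

```rust
// Hypothetical sketch of IN-clause placeholder expansion: one "?" per
// bound value, so the query stays parameterized (no string interpolation
// of user data).
fn in_clause(column: &str, n: usize) -> String {
    let placeholders = vec!["?"; n].join(", ");
    format!("{} IN ({})", column, placeholders)
}

fn main() {
    let clause = in_clause("id", 3);
    assert_eq!(clause, "id IN (?, ?, ?)");
    println!("SELECT * FROM users WHERE {}", clause);
}
```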

### Composite Enum List Methods

For composite indexes with trailing enum columns:

```rust
// Index on (user_id, device_type) where device_type is ENUM
find_by_user_id_and_device_types(user_id, &[DeviceType]) -> Vec<T>
```

### Pagination Methods

| Method | Return | Description |
|--------|--------|-------------|
| `find_all_paginated` | `Vec<T>` | Paginated query with sorting |
| `get_paginated_result` | `PaginatedResult<T>` | Includes total count, pages, has_next |

Generated helper types: `SortDirection`, `{Table}SortBy`, `PaginatedResult<T>`
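The bookkeeping a `PaginatedResult`-style helper must perform from a total row count is just ceiling division plus a bounds check. A self-contained sketch (hypothetical functions; field names in the generated type may differ):

```rust
// Hypothetical sketch of pagination math for a PaginatedResult-style type.
// Pages are zero-indexed in this sketch.
fn total_pages(total_rows: u64, page_size: u64) -> u64 {
    // Ceiling division: 101 rows at 20 per page -> 6 pages.
    (total_rows + page_size - 1) / page_size
}

fn has_next(page: u64, total_rows: u64, page_size: u64) -> bool {
    page + 1 < total_pages(total_rows, page_size)
}

fn main() {
    assert_eq!(total_pages(101, 20), 6);
    assert!(has_next(4, 101, 20));  // page 4 is the 5th of 6 pages
    assert!(!has_next(5, 101, 20)); // page 5 is the last page
}
```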

## Custom Queries

Extend generated DAOs or write standalone queries:

```rust
#[derive(rdbi::FromRow)]
pub struct UserStats {
    pub user_id: i64,
    pub order_count: i64,
}

// Add custom method to generated DAO
impl dao::users::UsersDao {
    pub async fn find_with_stats(pool: &impl rdbi::Pool) -> rdbi::Result<Vec<UserStats>> {
        rdbi::Query::new(
            "SELECT u.id as user_id, COUNT(o.id) as order_count
             FROM users u LEFT JOIN orders o ON u.id = o.user_id
             GROUP BY u.id"
        )
        .fetch_all(pool)
        .await
    }
}
```

## Transactions

Execute operations with a consistent callback-style API:

```rust
use rdbi::{Transactional, IsolationLevel};

// Without transaction - each statement auto-commits
pool.with_connection(|conn| Box::pin(async move {
    dao::users::insert(conn, &user).await?;
    dao::orders::insert(conn, &order).await?;
    Ok(())
})).await?;

// With transaction - auto-commit on Ok, auto-rollback on Err
let order_id = pool.in_transaction(|tx| Box::pin(async move {
    dao::users::insert(tx, &user).await?;
    dao::orders::insert(tx, &order).await?;
    Ok(order.id)
})).await?;

// With custom isolation level
pool.in_transaction_with(IsolationLevel::ReadCommitted, |tx| Box::pin(async move {
    // Uses ReadCommitted instead of default Serializable
    Ok(())
})).await?;
```

For manual control:

```rust
let tx = pool.begin().await?;
dao::users::insert(&tx, &user).await?;
dao::orders::insert(&tx, &order).await?;
tx.commit().await?; // or tx.rollback().await?
```

**Isolation Levels:** `ReadUncommitted`, `ReadCommitted`, `RepeatableRead`, `Serializable` (default)
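Each level corresponds to a standard MySQL `SET TRANSACTION` statement issued before `BEGIN`. A sketch of that mapping (the enum mirrors rdbi's `IsolationLevel` variants; the `to_sql` helper is hypothetical):

```rust
// Hypothetical sketch: the SQL a driver must issue for each isolation
// level before starting the transaction.
#[allow(dead_code)]
enum IsolationLevel {
    ReadUncommitted,
    ReadCommitted,
    RepeatableRead,
    Serializable,
}

fn to_sql(level: &IsolationLevel) -> &'static str {
    match level {
        IsolationLevel::ReadUncommitted => "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED",
        IsolationLevel::ReadCommitted => "SET TRANSACTION ISOLATION LEVEL READ COMMITTED",
        IsolationLevel::RepeatableRead => "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ",
        IsolationLevel::Serializable => "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE",
    }
}

fn main() {
    assert!(to_sql(&IsolationLevel::ReadCommitted).ends_with("READ COMMITTED"));
    println!("{}", to_sql(&IsolationLevel::Serializable));
}
```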

## Derive Attributes

```rust
#[derive(rdbi::FromRow, rdbi::ToParams)]
pub struct User {
    #[rdbi(skip_insert)]           // Exclude from INSERT (auto-increment)
    pub id: i64,

    #[rdbi(rename = "user_name")]  // Map to different column name
    pub username: String,

    #[rdbi(skip)]                  // Don't read from DB (use Default)
    pub computed_field: String,
}
```

## Type Mapping

| MySQL | Rust |
|-------|------|
| BIGINT | `i64` |
| INT | `i32` |
| VARCHAR, TEXT | `String` |
| BOOLEAN, TINYINT(1) | `bool` |
| DECIMAL | `rust_decimal::Decimal` |
| DATETIME, TIMESTAMP | `chrono::NaiveDateTime` |
| DATE | `chrono::NaiveDate` |
| TIME | `chrono::NaiveTime` |
| ENUM | Generated enum |
| BLOB, BINARY | `Vec<u8>` |
| JSON | `serde_json::Value` |

Nullable columns → `Option<T>`
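A sketch of what the nullability rule produces for a table with NULL columns (derive attributes omitted so the snippet compiles stand-alone; real generated structs carry `#[derive(FromRow)]`, and nullable date/time columns use the chrono types above rather than `i64`):

```rust
// Hypothetical generated struct for:
//   CREATE TABLE profiles (
//       id  BIGINT NOT NULL,
//       bio TEXT NULL
//   );
#[allow(dead_code)]
struct Profile {
    id: i64,             // NOT NULL -> plain type
    bio: Option<String>, // NULL     -> Option<T>
}

fn main() {
    let p = Profile { id: 1, bio: None };
    // NULL in the row surfaces as None, not an error or empty string.
    assert!(p.bio.is_none());
    println!("id={}", p.id);
}
```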

## CLI Usage

```bash
# Install
cargo install rdbi-codegen

# Generate code
rdbi-codegen --schema schema.sql --output ./src/generated generate

# Preview without writing
rdbi-codegen --schema schema.sql --output ./src/generated --dry-run generate

# Inspect parsed schema
rdbi-codegen --schema schema.sql inspect
```

## Configuration Options

For `build.rs` via `Cargo.toml`:

```toml
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_structs_dir = "src/generated/models"  # Default: $OUT_DIR/models
output_dao_dir = "src/generated/dao"          # Default: $OUT_DIR/dao
include_tables = ["users", "orders"]          # Only these tables
exclude_tables = ["migrations"]               # Skip these tables
generate_structs = true
generate_dao = true
```

Or create `rdbi-codegen.toml` for CLI usage.

## Contributing

### Commit Messages

This project uses [conventional commits](https://www.conventionalcommits.org/) with [release-please](https://github.com/googleapis/release-please) for automated releases:

- `feat: add connection pooling` — new feature (bumps minor version)
- `fix: handle timeout correctly` — bug fix (bumps patch version)
- `feat!: redesign Pool trait` — breaking change (bumps major version)
- `chore:`, `docs:`, `refactor:` — no version bump

### Setup

```bash
git config core.hooksPath .githooks  # Enable pre-commit fmt/clippy checks
```

## License

Licensed under the Apache License, Version 2.0 — see [LICENSE](LICENSE) for details.

This project also supports the [Anti-996 License](https://github.com/996icu/996.ICU/blob/master/LICENSE). We encourage fair labor practices and oppose the "996" working schedule.