# dbkit-rs
Reusable Postgres + DuckDB database infrastructure for Rust applications.
## Features

- **Connection pooling** — `ConnectionManager` wraps deadpool-postgres with auto-create database, configurable pool size, and timeouts
- **Configurable** — `DbkitConfig` builder with connection string construction, SSL modes, and pool tuning
- **Unified query executor** — `BaseHandler` for Postgres writes (`WriteOp`) and optional DuckDB analytical reads (`ReadOp`)
- **Arrow support** — `execute_arrow()` returns `Vec<RecordBatch>` for ML/analytics pipelines
- **PG→DuckDB sync** — `sync_tables()` and `sync_table_filtered()` copy Postgres data into local DuckDB for fast analytical reads
- **Migration tracking** — `InitializationHandler` with named migrations tracked by content hash
- **Concurrent cache** — generic DashMap-based key-value cache with named buckets (keys and values default to `String`)
- **Unicode normalization** — `BaseHandler::normalize_name()` for consistent name matching via NFD decomposition
- **Optional DuckDB** — behind a `duckdb` feature flag to avoid the heavy bundled build when not needed
## Usage

```rust
use dbkit::{BaseHandler, ConnectionManager, DbkitConfig, InitializationHandler};
use std::sync::Arc;

// Connect with defaults
let conn = ConnectionManager::new(DbkitConfig::default()).await?;

// Or use the config builder (values shown are illustrative)
let config = DbkitConfig::builder()
    .host("localhost")
    .database("mydb")
    .user("postgres")
    .password("secret")
    .pool_size(16)
    .build();
let conn = ConnectionManager::connect(config).await?;
let pool = Arc::new(conn);

// Run tracked migrations; unchanged SQL is detected via its content hash
let init = InitializationHandler::new(pool.clone());
init.run_named_migration("001_users", "CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT)")
    .await?;

// Query
let handler = BaseHandler::new(pool);
```
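The features list routes Postgres writes through `WriteOp`. A minimal sketch of what a write might look like; note that `WriteOp::new`, the `Param` variants, and `execute_write` are assumed shapes here, not confirmed signatures:

```rust
// Assumed API shapes: WriteOp::new, the Param variants, and execute_write
// are illustrative; check the crate docs for the exact signatures.
let op = WriteOp::new(
    "INSERT INTO users (id, name) VALUES ($1, $2)",
    vec![Param::Int(1), Param::Text("alice".into())],
);
handler.execute_write(op).await?;
```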
### Pool health

```rust
let status = conn.pool_status();
// Status mirrors the underlying deadpool pool; field names shown are illustrative
println!("size: {}", status.size);
println!("available: {}", status.available);
```
## DuckDB reads (optional)

Enable the `duckdb` feature:

```toml
dbkit = { version = "0.2", features = ["duckdb"] }
```
### Standard mapped reads

```rust
use dbkit::{BaseHandler, ReadOp};

let handler = BaseHandler::with_duckdb(pool)?;
// ReadOp construction is illustrative; it describes the analytical query to run
let result = handler.execute_read(ReadOp::new("SELECT id, name FROM users")).await?;
```
### Arrow reads

Returns `Vec<RecordBatch>` directly — ideal for ML training pipelines or columnar analytics:

```rust
use arrow::record_batch::RecordBatch;

// Query argument is illustrative
let batches: Vec<RecordBatch> = handler.execute_arrow("SELECT * FROM events").await?;
```
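From there, the standard `arrow` crate APIs apply. For example, downcasting a column to its concrete array type (the column index and `Int64` type are assumptions about your schema):

```rust
use arrow::array::Int64Array;

for batch in &batches {
    // Downcast the first column to its concrete Arrow array type
    let ids = batch
        .column(0)
        .as_any()
        .downcast_ref::<Int64Array>()
        .expect("column 0 should be Int64");
    println!("{} rows, first id: {:?}", batch.num_rows(), ids.iter().next());
}
```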
### Optional parameters

Use the `Opt*` variants when values may be NULL:

```rust
// Variant names are illustrative; see the crate's parameter enum for the full set
let params = vec![
    Param::Text("alice".into()),
    Param::OptInt(None), // binds SQL NULL
];
```
### Syncing tables from Postgres to DuckDB

Copy full tables or filtered subsets into DuckDB's local memory for fast analytical queries:

```rust
// Sync entire tables (table names are illustrative)
handler.sync_tables(&["orders", "customers"]).await?;

// Sync with a filter
handler.sync_table_filtered("orders", "created_at > now() - interval '30 days'").await?;

// Now query the local copy (memory.main.orders) for fast reads
let batches = handler.execute_arrow("SELECT * FROM memory.main.orders").await?;
```
## Name normalization

Unicode NFD decomposition + lowercasing for consistent name matching:

```rust
// "Café" with a precomposed é and "Cafe\u{301}" with a combining accent
// normalize to the same string
let normalized = BaseHandler::normalize_name("Café");
assert_eq!(normalized, BaseHandler::normalize_name("Cafe\u{301}"));
```
## Cache

Both key and value types are generic, defaulting to `String`:

```rust
use dbkit::Cache;

// Default String/String cache — works exactly like before
// (bucket, key, and value arguments shown are illustrative)
let cache: Cache = Cache::with_buckets(&["names"]);
cache.set("names", "key1", "value1");
let val = cache.get("names", "key1");

// Typed values: String keys, i32 values
let counts: Cache<String, i32> = Cache::with_buckets(&["hits"]);
counts.set("hits", "page", 3);
assert_eq!(counts.get("hits", "page"), Some(3));

// Typed keys and values: i32 keys, custom struct values
// (User is a hypothetical struct standing in for your own type)
let users: Cache<i32, User> = Cache::new();
users.set("default", 1, User { name: "alice".into() });
let user = users.get("default", 1).unwrap();
```
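Because the cache is backed by DashMap, it can be shared across threads without external locking. A minimal sketch, assuming `Cache` is `Send + Sync` (the usual case for a DashMap wrapper):

```rust
use std::sync::Arc;
use std::thread;

let cache = Arc::new(Cache::with_buckets(&["names"]));
let worker = Arc::clone(&cache);
thread::spawn(move || {
    // Writes go through DashMap's sharded locks, so no Mutex is needed
    worker.set("names", "from_thread", "hello");
})
.join()
.unwrap();
```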
## Feature flags

| Feature | Default | Description |
|---|---|---|
| `duckdb` | off | Enables DuckDB reads, Arrow support, and PG→DuckDB sync via `BaseHandler::with_duckdb()` |
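With the feature left off, the default dependency skips the heavy bundled DuckDB build entirely (crate name assumed to match the earlier snippet):

```toml
# Postgres only; no bundled DuckDB build
dbkit = "0.2"
```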
## License

MIT