# dbcli
[![Crates.io](https://img.shields.io/crates/v/dbcli.svg)](https://crates.io/crates/dbcli)
[![Documentation](https://docs.rs/dbcli/badge.svg)](https://docs.rs/dbcli)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
Automatically convert SQL query results to JSON without struct mapping.
## Features
- **Schema-free** — no struct definitions or `#[derive]` needed; works directly on raw query rows
- **Multi-database** — supports MySQL, PostgreSQL, and SQLite via feature flags (all enabled by default)
- **CLI tool** — interactive REPL mode with SQL history, colored table output, and JSON export
- **Vector support** — pgvector (`vector`, `halfvec`, `sparsevec`), MySQL HeatWave / MySQL 9.0+, sqlite-vec
- **Customizable** — override type conversion logic per-column with the `ToJsonCustomizer` trait
- **Performant** — the per-column conversion function table is built once per query, so row iteration adds no dispatch overhead
- **ODBC support** — query Microsoft Access (.mdb/.accdb), Excel (.xls/.xlsx), and any ODBC-compatible data source; auto-detects driver from file extension
## Installation
By default, all features are enabled (CLI + all databases):
```toml
[dependencies]
# Default: all features enabled (CLI + all databases)
dbcli = "0.1"

# Or select specific databases (as a library, without the CLI):
# dbcli = { version = "0.1", default-features = false, features = ["postgres"] }
```
Each feature automatically pulls in the corresponding `sqlx` driver and auxiliary crates:
| Feature | Additional dependencies |
|---|---|
| `cli` | `clap`, `tokio`, `comfy-table`, `rustyline`, `crossterm` |
| `mysql` | `sqlx/mysql`, `chrono`, `rust_decimal`, `base64`, `encoding_rs` |
| `postgres` | `sqlx/postgres`, `chrono`, `rust_decimal`, `base64`, `encoding_rs` |
| `sqlite` | `sqlx/sqlite`, `base64` |
| `odbc` | `odbc-api`, `base64`, `encoding_rs`, `tokio` (requires system unixODBC library) |
> **Note:** The `odbc` feature is **not** included in the default feature set because it requires the system-level unixODBC library. Enable it explicitly when needed.
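For example, a project that needs ODBC alongside PostgreSQL might opt in explicitly (feature names taken from the table above):

```toml
[dependencies]
dbcli = { version = "0.1", default-features = false, features = ["cli", "postgres", "odbc"] }
```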
## Quick Start
### PostgreSQL
```rust,no_run
use sqlx::PgPool;
use dbcli::to_json::postgres::to_json;
async fn query_to_json(pool: &PgPool) -> anyhow::Result<()> {
let rows = sqlx::query("SELECT id, name, created_at FROM users")
.fetch_all(pool)
.await?;
// data — Vec<serde_json::Value>, one JSON object per row
// columns — Vec<ColumnBaseInfo>, column name / type / index metadata
let (data, columns) = to_json(rows)?;
println!("{}", serde_json::to_string_pretty(&data)?);
Ok(())
}
```
### MySQL
```rust,no_run
use sqlx::MySqlPool;
use dbcli::to_json::mysql::to_json;
async fn query_to_json(pool: &MySqlPool) -> anyhow::Result<()> {
let rows = sqlx::query("SELECT id, name, created_at FROM orders")
.fetch_all(pool)
.await?;
let (data, _columns) = to_json(rows)?;
println!("{}", serde_json::to_string_pretty(&data)?);
Ok(())
}
```
### SQLite
```rust,no_run
use sqlx::SqlitePool;
use dbcli::to_json::sqlite::to_json;
async fn query_to_json(pool: &SqlitePool) -> anyhow::Result<()> {
let rows = sqlx::query("SELECT id, title FROM notes")
.fetch_all(pool)
.await?;
let (data, _columns) = to_json(rows)?;
println!("{}", serde_json::to_string_pretty(&data)?);
Ok(())
}
```
### ODBC (Access / Excel)
```rust,no_run
use dbcli::to_json::odbc::to_json;
use odbc_api::{Environment, ConnectionOptions};
fn query_access() -> anyhow::Result<()> {
let env = Environment::new()?;
let conn = env.connect_with_connection_string(
r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\mydb.mdb;",
ConnectionOptions::default(),
)?;
if let Some(mut cursor) = conn.execute("SELECT * FROM Users", (), None)? {
let (data, columns) = to_json(&mut cursor)?;
println!("{}", serde_json::to_string_pretty(&data)?);
}
Ok(())
}
```
> ODBC is synchronous; when used in an async context, `execute_raw_sql` wraps calls in `tokio::task::spawn_blocking` automatically.
**Example output:**
```json
[
{ "id": 1, "name": "Alice", "created_at": "2024-01-15 09:30:00" },
{ "id": 2, "name": "Bob", "created_at": "2024-03-22 14:05:11" }
]
```
## CLI Tool
When built with the `cli` feature (included by default), `dbcli` provides a standalone
command-line tool for interactive SQL querying.
### Build & Install
```bash
# Build with all databases (default)
cargo build -p dbcli --release
# Or select specific databases
cargo build -p dbcli --release --no-default-features --features "cli,postgres"
# Build with ODBC support
cargo build -p dbcli --release --no-default-features --features "cli,odbc"
# Build with ODBC + PostgreSQL
cargo build -p dbcli --release --no-default-features --features "cli,odbc,postgres"
```
### Usage
```bash
# Single query
dbcli -u postgres://user:pass@host:5432/db --sql "SELECT * FROM users"
# Interactive REPL mode (omit --sql)
dbcli -u postgres://user:pass@host:5432/db
# Use environment variable
export DATABASE_URL=postgres://user:pass@host:5432/db
dbcli
# JSON output (pipe-friendly)
dbcli -u ... --sql "SELECT * FROM users" --json
# Limit rows in table mode (default: 1000)
dbcli -u ... --sql "SELECT * FROM large_table" -l 100
# Test connection only
dbcli -u postgres://user:pass@host:5432/db --connect
# Query Access database via ODBC
dbcli -u "odbc:///path/to/file.mdb" --sql "SELECT * FROM Users"
# Query Excel spreadsheet via ODBC
dbcli -u "odbc:///path/to/file.xlsx" --sql "SELECT * FROM [Sheet1$]"
# Direct ODBC connection string
dbcli -u "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=/path/to/file.mdb" --sql "SELECT * FROM Users"
```
### REPL Mode
When no `--sql` argument is provided, `dbcli` enters interactive REPL mode:
- Multi-line SQL input (terminate with `;`)
- Arrow-key history navigation (persisted to `~/.db-json_history`)
- Type `exit`, `quit`, or `\q` to leave
- Ctrl+C clears the current multi-line buffer; Ctrl+D exits immediately
### Parameters
| Flag | Alias | Description |
|---|---|---|
| `--url` | `-u` | Database connection URL (or set `DATABASE_URL` env) |
| `--sql` | `-s` | SQL query to execute (omit for REPL mode) |
| `--connect` | `--conn` | Test connection only |
| `--json` | `-j` (implicit) | Force JSON output |
| `--table` | | Force table output |
| `--limit` | `-l` | Max rows in table mode (default: 1000, 0 = unlimited) |
| `--max-col-width` | `--mcw` | Max column width (default: 0 = auto) |
| `--help` | `-h` | Show help |
| `--version` | `-V` | Show version |
### Output Modes
- **Terminal (TTY)**: Colored table with UTF-8 borders, auto-adapted to terminal width
- **Pipe / redirect**: JSON array output for programmatic use
- Override with `--json` or `--table` flags
## Custom Type Parsing
Implement the `ToJsonCustomizer` trait to override the default conversion for any column type.
Register it **once** at application startup via `set_to_json_customizer`.
```rust,no_run
use dbcli::to_json::{ToJsonCustomizer, set_to_json_customizer};
struct MyCustomizer;
impl ToJsonCustomizer for MyCustomizer {
/// Override MySQL DATETIME format to ISO 8601.
#[cfg(feature = "mysql")]
fn customize_mysql(
&self,
type_name: &str,
) -> Option<fn(&sqlx::mysql::MySqlRow, usize) -> serde_json::Value> {
match type_name {
"DATETIME" => Some(|row, idx| {
use sqlx::Row;
use chrono::NaiveDateTime;
match row.try_get::<Option<NaiveDateTime>, _>(idx) {
Ok(Some(dt)) => serde_json::Value::String(
dt.format("%Y-%m-%dT%H:%M:%S").to_string(),
),
_ => serde_json::Value::Null,
}
}),
_ => None,
}
}
/// Override PostgreSQL TIMESTAMP format.
#[cfg(feature = "postgres")]
fn customize_pg(
&self,
type_name: &str,
) -> Option<fn(&sqlx::postgres::PgRow, usize) -> serde_json::Value> {
match type_name {
"TIMESTAMP" => Some(|row, idx| {
use sqlx::Row;
use chrono::NaiveDateTime;
match row.try_get::<Option<NaiveDateTime>, _>(idx) {
Ok(Some(dt)) => serde_json::Value::String(
dt.format("%Y-%m-%dT%H:%M:%S").to_string(),
),
_ => serde_json::Value::Null,
}
}),
_ => None,
}
}
}
fn main() {
// Register once — subsequent calls are silently ignored (no panic, no overwrite)
set_to_json_customizer(Box::new(MyCustomizer));
}
```
**How it works:**
1. `determine_parsing_methods` calls `customize_mysql` / `customize_pg` / `customize_sqlite` **once per column** when the first row is processed.
2. The returned function pointer (or the built-in default) is cached in a `Vec`.
3. All subsequent rows call the cached pointer directly — no trait dispatch, no branch per row.
Implement only the methods for the databases you need; the rest fall back to the built-in conversions.
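The cached-dispatch pattern described above can be sketched with plain `std` — no dbcli types involved; `CellFn`, `build_parsers`, and the toy converters below are illustrative names, not part of the crate's API:

```rust
// Per-column converter: takes a raw cell as text, returns its JSON encoding.
type CellFn = fn(&str) -> String;

fn as_number(raw: &str) -> String {
    raw.to_string() // numbers pass through unquoted
}

fn as_string(raw: &str) -> String {
    format!("\"{}\"", raw) // everything else is quoted
}

/// Resolve one converter per column, once, from the column type names.
fn build_parsers(type_names: &[&str]) -> Vec<CellFn> {
    type_names
        .iter()
        .map(|t| match *t {
            "INT" | "FLOAT" => as_number as CellFn,
            _ => as_string as CellFn,
        })
        .collect()
}

fn main() {
    let parsers = build_parsers(&["INT", "TEXT"]);
    // Every row reuses the cached pointers -- no per-row type matching.
    for row in [["1", "Alice"], ["2", "Bob"]] {
        let cells: Vec<String> = parsers.iter().zip(row).map(|(f, c)| f(c)).collect();
        println!("{}", cells.join(", "));
    }
}
```

The trait method is consulted only while building the `Vec`; the hot loop sees nothing but direct function-pointer calls.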
## Type Mapping Reference
All `type_name` values passed to `ToJsonCustomizer` are **UPPERCASE** strings.
### MySQL
| MySQL type | `type_name` | JSON value |
|---|---|---|
| VARCHAR / CHAR / TEXT | `"VARCHAR"` / `"CHAR"` / `"TEXT"` | String |
| INT / BIGINT / SMALLINT (and unsigned variants) | `"INT"` / `"BIGINT"` / `"SMALLINT"` | i64 |
| FLOAT / DOUBLE / REAL | `"FLOAT"` / `"DOUBLE"` / `"REAL"` | f64 (NaN/Inf → String) |
| DATETIME | `"DATETIME"` | String `"%Y-%m-%d %H:%M:%S"` |
| DATE | `"DATE"` | String `"%Y-%m-%d"` |
| TIME | `"TIME"` | String `"%H:%M:%S"` (via NaiveTime) |
| TIMESTAMP | `"TIMESTAMP"` | String `"%Y-%m-%d %H:%M:%S %Z"` (DateTime<Utc> → local) |
| DECIMAL / NUMERIC | `"DECIMAL"` / `"NUMERIC"` | String (precision preserved) |
| BLOB / TINYBLOB | `"BLOB"` / `"TINYBLOB"` | String (text) or Base64 (binary) |
| MEDIUMBLOB / LONGBLOB | `"MEDIUMBLOB"` / `"LONGBLOB"` | Base64 String |
| BINARY | `"BINARY"` | UUID String (16-byte) or Base64 String |
| VARBINARY | `"VARBINARY"` | Auto encoding detection (UTF-8/GBK/BIG5) → String |
| JSON | `"JSON"` | serde_json::Value |
| BOOLEAN / BOOL | `"BOOLEAN"` / `"BOOL"` | bool |
| ENUM | `"ENUM"` | String |
| SET | `"SET"` | JSON Array |
| VECTOR (HeatWave / MySQL 9.0+) | `"VECTOR"` | JSON Array (f64 values) |
### PostgreSQL
Raw PG type names are normalized to uppercase by `detect_pg_type`.
| PostgreSQL type | `type_name` | JSON value |
|---|---|---|
| text / varchar / char / bpchar / citext | `"TEXT"` | String |
| int2 / int4 / int8 / smallint / bigint | `"INT"` | i64 |
| float4 / float8 / real | `"FLOAT"` | f64 (via `parse_float_value`, NaN/Inf → String) |
| numeric / decimal | `"NUMERIC"` | String (precision preserved) |
| bool / boolean | `"BOOL"` | bool |
| date | `"DATE"` | String `"%Y-%m-%d"` |
| timestamp without time zone | `"TIMESTAMP"` | String `"%Y-%m-%d %H:%M:%S"` (NaiveDateTime) |
| timestamp with time zone | `"TIMESTAMPTZ"` | String (RFC 3339) |
| time / timetz | `"TIME"` | String `"%H:%M:%S"` (via NaiveTime) |
| jsonb / json | `"JSON"` | serde_json::Value |
| bytea | `"BYTEA"` | String (text) or Base64 (binary) |
| uuid | `"UUID"` | String |
| array types | `"ARRAY"` | JSON Array |
| interval / money / inet / cidr / macaddr / xml | `"TEXT"` | String |
| geometry / geography | `"GEOMETRY"` | String |
| hstore | `"HSTORE"` | String |
| enum types | `"ENUM"` | String (label) |
| range types | `"RANGE"` | JSON Object `{lower, upper, lower_inc, upper_inc}` |
| composite types | `"COMPOSITE"` | String (text representation) |
| domain types | `"DOMAIN"` | Resolved to underlying base type |
| vector / halfvec (pgvector) | `"VECTOR"` | JSON Array (f64 values) |
| sparsevec (pgvector) | `"SPARSEVEC"` | JSON Object `{dimensions, indices, values}` |
| bit / varbit | `"BIT"` | String (binary, e.g. `"10101010"`) |
### SQLite
| SQLite type | `type_name` | JSON value |
|---|---|---|
| TEXT / DATETIME / DATE / TIME | `"TEXT"` | String (auto-parses JSON if starts with `{` or `[`) |
| INTEGER / BOOLEAN | `"INTEGER"` | i64 |
| REAL | `"REAL"` | f64 |
| BLOB | `"BLOB"` | Base64 String |
| NUMERIC | `"NUMERIC"` | Auto-infer: i64 / f64 / String |
| NULL | `"NULL"` | Dynamic inference |
| VECTOR (sqlite-vec) | `"VECTOR"` | JSON Array (f64 values) |
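The NUMERIC auto-inference row above can be illustrated with a std-only sketch — the function name and exact fallback order here are illustrative, and the crate's real parsing may differ in detail:

```rust
/// Illustrative sketch of an i64 -> f64 -> String fallback for NUMERIC cells.
fn infer_numeric(raw: &str) -> String {
    if let Ok(i) = raw.parse::<i64>() {
        return i.to_string(); // fits a JSON integer
    }
    if let Ok(f) = raw.parse::<f64>() {
        if f.is_finite() {
            return f.to_string(); // fits a JSON float
        }
    }
    format!("\"{}\"", raw) // otherwise keep the text verbatim as a JSON string
}

fn main() {
    assert_eq!(infer_numeric("42"), "42");
    assert_eq!(infer_numeric("3.5"), "3.5");
    assert_eq!(infer_numeric("12,5"), "\"12,5\"");
    println!("ok");
}
```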
### ODBC
Type mapping is based on the `odbc_api::DataType` enum. Column names from sources without
headers (e.g., Excel without HDR) are auto-generated as `col_1`, `col_2`, etc. Duplicate
names are disambiguated with `_2`, `_3` suffixes.
| ODBC type | `type_name` | JSON value |
|---|---|---|
| Integer / SmallInt / TinyInt / BigInt | `"INTEGER"` | i64 |
| Float / Double / Real | `"FLOAT"` | f64 (NaN/Inf → String) |
| Decimal / Numeric | `"DECIMAL"` | String (precision preserved) |
| Bit | `"BIT"` | bool (Access: -1/1 = true, 0 = false) |
| Char / Varchar / LongVarchar / WChar / WVarchar / WLongVarchar | `"TEXT"` | String (auto encoding: UTF-8 / GBK / BIG5) |
| Date | `"DATE"` | String |
| Time | `"TIME"` | String |
| Timestamp | `"TIMESTAMP"` | String |
| Binary / Varbinary / LongVarbinary | `"BINARY"` | Base64 String |
| Other | `"TEXT"` | String |
## API Reference
### Core functions
| Function | Description |
|---|---|
| `to_json::mysql::to_json(rows)` | Convert `Vec<MySqlRow>` → `(Vec<Value>, Vec<ColumnBaseInfo>)` |
| `to_json::postgres::to_json(rows)` | Convert `Vec<PgRow>` → `(Vec<Value>, Vec<ColumnBaseInfo>)` |
| `to_json::sqlite::to_json(rows)` | Convert `Vec<SqliteRow>` → `(Vec<Value>, Vec<ColumnBaseInfo>)` |
| `to_json::odbc::to_json(cursor)` | Convert ODBC `Cursor` → `(Vec<Value>, Vec<ColumnBaseInfo>)` |
| `execute::odbc::execute_raw_sql(conn_str, sql)` | Execute SQL via ODBC, returns `Vec<SqlResult>` |
| `to_json::set_to_json_customizer(b)` | Register a global customizer (call once at startup) |
| `to_json::f64_to_json_safe(f)` | Convert `f64` to JSON, mapping NaN/Infinity to strings |
| `to_json::blob_is_text(data)` | Heuristic: detect whether binary data is human-readable text |
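As a rough illustration of the NaN/Infinity guard behind `f64_to_json_safe`: JSON has no literal for non-finite floats, so they must be mapped to something else. The exact output strings (`"NaN"`, `"Infinity"`) in this std-only sketch are assumptions, not the crate's guaranteed literals:

```rust
/// Sketch of a NaN/Infinity-safe float-to-JSON mapping. The string forms
/// chosen here are illustrative assumptions.
fn f64_to_json_safe_sketch(f: f64) -> String {
    if f.is_nan() {
        "\"NaN\"".to_string()
    } else if f == f64::INFINITY {
        "\"Infinity\"".to_string()
    } else if f == f64::NEG_INFINITY {
        "\"-Infinity\"".to_string()
    } else {
        f.to_string() // finite floats serialize as plain JSON numbers
    }
}

fn main() {
    assert_eq!(f64_to_json_safe_sketch(2.5), "2.5");
    assert_eq!(f64_to_json_safe_sketch(f64::NAN), "\"NaN\"");
    assert_eq!(f64_to_json_safe_sketch(f64::NEG_INFINITY), "\"-Infinity\"");
    println!("ok");
}
```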
### Types
| Type | Description |
|---|---|
| `to_json::ToJsonCustomizer` | Trait for custom per-column type conversion |
| `column_info::ColumnBaseInfo` | Column metadata: `name`, `type`, `index` |
Full documentation is available on [docs.rs/dbcli](https://docs.rs/dbcli).
## License
MIT