reflect-db 0.1.0

reflect-db

reflect-db provides a Rust library that can reflect database schemas and metadata dynamically, conceptually mirroring SQLAlchemy's MetaData.reflect().

It leverages sqlx connections and provides backend-independent reflection using a unified MetaData and Table representation.

Core Features

  • Backend-agnostic: Executor trait implemented for PostgreSQL, MySQL, and SQLite.
  • Rich Metadata Extraction: Extracts schemas, tables, views, columns, nullable status, custom SQL types, default values, primary keys, foreign keys, and indexes.
  • SQLAlchemy Parity: Mirrors SQLAlchemy's reflection options, including include_views, only (filter tables by name or by predicate), extend_existing, autoload_replace, and resolve_fks.
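To illustrate the backend-agnostic idea, here is a minimal, self-contained sketch of how a trait can abstract reflection over backends. The trait and type names below are simplified stand-ins for illustration only, not reflect-db's actual API:

```rust
// Simplified stand-in for reflect-db's Executor abstraction: each backend
// implements the same trait, so reflection logic is written once against it.

/// A minimal reflection interface (illustrative only).
trait SchemaSource {
    /// List the table names visible to this backend.
    fn table_names(&self) -> Vec<String>;
}

/// A pretend SQLite backend returning hard-coded names.
struct FakeSqlite;
impl SchemaSource for FakeSqlite {
    fn table_names(&self) -> Vec<String> {
        vec!["users".to_string(), "posts".to_string()]
    }
}

/// A pretend PostgreSQL backend.
struct FakePostgres;
impl SchemaSource for FakePostgres {
    fn table_names(&self) -> Vec<String> {
        vec!["accounts".to_string()]
    }
}

/// Backend-independent code: works with any SchemaSource.
fn count_tables(source: &dyn SchemaSource) -> usize {
    source.table_names().len()
}

fn main() {
    println!("{}", count_tables(&FakeSqlite));   // 2
    println!("{}", count_tables(&FakePostgres)); // 1
}
```

In reflect-db itself this role is played by the Executor trait, whose implementations additionally run the backend-specific catalog queries over sqlx connections.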

Usage & SQLite Example

Here is a quick example of reflecting an existing SQLite database using the SqliteExecutor:

use sqlx::SqlitePool;
use reflect_db::{MetaData, ReflectOptions, SqliteExecutor, TableFilter};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Connect to your database
    let pool = SqlitePool::connect("sqlite::memory:").await?;

    // Create a dummy table for demonstration
    sqlx::query(
        "CREATE TABLE users (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name VARCHAR(255) NOT NULL,
            email TEXT UNIQUE,
            created_at DATETIME DEFAULT CURRENT_TIMESTAMP
        )"
    ).execute(&pool).await?;

    sqlx::query(
        "CREATE TABLE posts (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id INTEGER NOT NULL,
            title TEXT NOT NULL,
            content TEXT,
            FOREIGN KEY (user_id) REFERENCES users (id)
        )"
    ).execute(&pool).await?;

    // 2. Initialize Executor and MetaData
    let executor = SqliteExecutor::new(pool);
    let mut metadata = MetaData::new();

    // 3. Configure Reflection Strategy
    let mut options = ReflectOptions::default();
    options.include_views = true;
    
    // Optional: Only include specific tables
    // options.only = Some(TableFilter::Names(vec!["users".to_string()]));
    
    // Optional: Filter via predicate (e.g. only reflect tables that DO NOT start with 'archive_')
    // options.only = Some(TableFilter::Predicate(Arc::new(|name, _meta| !name.starts_with("archive_"))));

    // 4. Execute Reflection
    metadata.reflect(&executor, &options).await?;

    // 5. Inspect the Results
    println!("Reflected Tables: {:?}", metadata.tables.keys());
    
    if let Some(users_table) = metadata.tables.get("users") {
        println!("Users Table columns:");
        for col in &users_table.columns {
            println!("  - {}: {:?} (Nullable: {}, Default: {:?})", col.name, col.data_type, col.nullable, col.default);
        }
        
        println!("Users Table PK: {:?}", users_table.primary_key);
    }
    
    if let Some(posts_table) = metadata.tables.get("posts") {
        println!("Posts Table FKs: {:?}", posts_table.foreign_keys);
    }

    Ok(())
}

JSON Serialization

Because reflect-db's structural types derive serde::Serialize and serde::Deserialize, you can export the entire reflected schema for consumption by external tools or language bindings.

A helper method extract_json() is provided on MetaData:

use reflect_db::{MetaData, ReflectOptions, SqliteExecutor};
use sqlx::SqlitePool;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = SqlitePool::connect("sqlite::memory:").await?;
    let executor = SqliteExecutor::new(pool);
    let mut metadata = MetaData::new();
    
    metadata.reflect(&executor, &ReflectOptions::default()).await?;

    // Serialize the reflected schema to a JSON string
    let json_schema = metadata.extract_json().unwrap();
    println!("{}", json_schema);

    Ok(())
}
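The exact JSON layout is determined by the crate's serde derives, but an export might look roughly like the following. All field names here are illustrative, not guaranteed output:

```json
{
  "tables": {
    "users": {
      "columns": [
        { "name": "id", "data_type": "INTEGER", "nullable": false, "default": null }
      ],
      "primary_key": ["id"],
      "foreign_keys": []
    }
  }
}
```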

Running Tests

Integration tests use testcontainers, so a running Docker daemon is required. The suite covers mock connections, in-memory SQLite, and containerized MySQL/PostgreSQL:

cargo test

For coverage statistics, install cargo-tarpaulin (cargo install cargo-tarpaulin) and run the script:

./run_coverage.sh