pgmold 0.25.0

PostgreSQL schema-as-code management tool. Define schemas in native PostgreSQL DDL, diff against live databases, plan migrations, and apply them safely.

Features

  • Schema-as-Code: Define PostgreSQL schemas in native SQL DDL files
  • Introspection: Read schema from live PostgreSQL databases
  • Diffing: Compare schemas and generate migration plans
  • Safety: Lint rules prevent destructive operations without explicit flags
  • Drift Detection: Detect schema drift in CI/CD
  • Transactional Apply: All migrations run in a single transaction
  • Partitioned Tables: Full support for PARTITION BY and PARTITION OF syntax

How pgmold Works

┌─────────────────────┐     ┌─────────────────────┐
│   Schema Files      │     │   Live Database     │
│   (Desired State)   │     │   (Current State)   │
└──────────┬──────────┘     └──────────┬──────────┘
           │                           │
           └───────────┬───────────────┘
                       ▼
              ┌─────────────────┐
              │   pgmold diff   │
              │   (compare)     │
              └────────┬────────┘
                       ▼
              ┌─────────────────┐
              │  Generated SQL  │
              │  (only changes) │
              └─────────────────┘

Example:

Your schema file says:

CREATE TABLE users (
  id UUID PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,  -- NEW
  created_at TIMESTAMP
);

Database currently has:

CREATE TABLE users (
  id UUID PRIMARY KEY,
  name TEXT NOT NULL,
  created_at TIMESTAMP
);

pgmold generates only the delta:

ALTER TABLE users ADD COLUMN email TEXT NOT NULL;
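Conceptually, the delta above falls out of a set comparison of columns: anything present in the desired state but missing from the database becomes an ADD COLUMN. The sketch below is illustrative only, not pgmold's actual implementation; the `diff_columns` helper and dict-based table representation are assumptions for the example.

```python
# Illustrative sketch of column-level diffing (NOT pgmold's real code):
# model each table as a mapping of column name -> type clause, then emit
# ALTER TABLE ... ADD COLUMN for columns only present in the desired state.

def diff_columns(table: str, desired: dict, current: dict) -> list:
    """Return ALTER statements that bring `current` up to `desired`."""
    stmts = []
    for name, type_clause in desired.items():
        if name not in current:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {name} {type_clause};")
    return stmts

desired = {
    "id": "UUID PRIMARY KEY",
    "name": "TEXT NOT NULL",
    "email": "TEXT NOT NULL",
    "created_at": "TIMESTAMP",
}
current = {k: v for k, v in desired.items() if k != "email"}

print(diff_columns("users", desired, current))
# -> ['ALTER TABLE users ADD COLUMN email TEXT NOT NULL;']
```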

Installation

cargo install pgmold

For the latest version from source:

cargo install --git https://github.com/fmguerreiro/pgmold

Quick Start

# 1. Create a schema file
cat > schema.sql << 'EOF'
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    email TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT now()
);
EOF

# 2. See what would change
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb

# 3. Apply the migration
pgmold apply -s sql:schema.sql -d postgres://localhost/mydb

Usage

# Diff two SQL schema files (outputs migration SQL)
pgmold diff --from sql:old.sql --to sql:new.sql

# Generate migration plan
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb

# Generate rollback plan (reverse direction)
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb --reverse

# Apply migrations (with safety checks)
pgmold apply -s sql:schema.sql -d postgres://localhost/mydb

# Apply with destructive operations allowed
pgmold apply -s sql:schema.sql -d postgres://localhost/mydb --allow-destructive

# Dry run (preview SQL without executing)
pgmold apply -s sql:schema.sql -d postgres://localhost/mydb --dry-run

# Lint schema
pgmold lint -s sql:schema.sql

# Detect drift (returns JSON report with exit code 1 if drift detected)
pgmold drift -s sql:schema.sql -d postgres://localhost/mydb -j

Guides

Multi-File Schemas

Organize your schema across multiple files using directories or glob patterns:

# Load all SQL files from a directory (recursive)
pgmold apply -s sql:./schema/ -d postgres://localhost/mydb

# Use glob patterns
pgmold apply -s "sql:schema/**/*.sql" -d postgres://localhost/mydb

# Multiple sources
pgmold apply -s sql:types.sql -s "sql:tables/*.sql" -d postgres://localhost/mydb

Example directory structure:

schema/
├── enums.sql           # CREATE TYPE statements
├── tables/
│   ├── users.sql       # users table + indexes
│   └── posts.sql       # posts table + foreign keys
└── functions/
    └── triggers.sql    # stored procedures

Duplicate definitions across files produce an error with file locations.
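The loading behavior described above can be sketched in a few lines: walk the source tree recursively, collect object definitions, and fail fast when two files define the same object. This is a hedged illustration, assuming a simplified regex-based parse; pgmold's actual loader and error format differ.

```python
# Sketch of multi-file loading with duplicate detection (illustrative only;
# pgmold's real parser and error reporting are more thorough).
import re
import tempfile
from pathlib import Path

def load_schema(root: Path) -> dict:
    """Map each CREATE TABLE name to the file that defines it,
    raising with both file locations on duplicates."""
    seen = {}
    for path in sorted(root.rglob("*.sql")):
        for name in re.findall(r"CREATE TABLE (\w+)", path.read_text()):
            if name in seen:
                raise ValueError(f"duplicate table {name!r}: {seen[name]} and {path}")
            seen[name] = path
    return seen

root = Path(tempfile.mkdtemp())
(root / "tables").mkdir()
(root / "tables" / "users.sql").write_text("CREATE TABLE users (id BIGINT);")
(root / "tables" / "posts.sql").write_text("CREATE TABLE posts (id BIGINT);")
print(sorted(load_schema(root)))  # -> ['posts', 'users']
```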

Filtering Objects

Filter by name patterns or object types.

Filter by name pattern:

# Include only objects matching patterns
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --include 'api_*' --include 'users'

# Exclude objects matching patterns
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --exclude '_*' --exclude 'pg_*'

Filter by object type:

# Only compare tables and functions (ignore extensions, views, triggers, etc.)
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --include-types tables,functions

# Exclude extensions from comparison
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --exclude-types extensions

Combine type and name filters:

# Compare only functions matching 'api_*', excluding internal ones
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --include-types functions \
  --include 'api_*' \
  --exclude '_*'

Filter nested types within tables:

# Compare tables without RLS policies
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --exclude-types policies

# Compare only table structure (no indexes, constraints, or policies)
pgmold plan -s sql:schema.sql -d postgres://localhost/mydb \
  --exclude-types policies,indexes,foreignkeys,checkconstraints

Available object types:

  • Top-level: extensions, tables, enums, domains, functions, views, triggers, sequences, partitions
  • Nested (within tables): policies, indexes, foreignkeys, checkconstraints
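One way to picture how the name filters combine: an object passes if it matches at least one --include pattern (when any are given) and matches no --exclude pattern. This precedence is an assumption for the sketch; consult pgmold's own docs for the exact rules.

```python
# Sketch of include/exclude name filtering using shell-style globs
# (assumption: includes are OR-ed, excludes always win).
from fnmatch import fnmatch

def passes(name, includes, excludes):
    if includes and not any(fnmatch(name, p) for p in includes):
        return False
    return not any(fnmatch(name, p) for p in excludes)

names = ["api_users", "api_orders", "_api_internal", "audit_log"]
kept = [n for n in names if passes(n, ["api_*"], ["_*"])]
print(kept)  # -> ['api_users', 'api_orders']
```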

Extension Objects

By default, pgmold excludes objects owned by extensions (e.g., PostGIS functions, pg_trgm operators) from diffs.

# Include extension objects if needed (e.g., for full database dumps)
pgmold dump -d postgres://localhost/mydb --include-extension-objects -o full_schema.sql

Adopting pgmold in an Existing Project

Use pgmold dump to create a baseline from a live database:

# Export current database schema to SQL files
pgmold dump -d postgres://localhost/mydb -o schema/baseline.sql

# For specific schemas only
pgmold dump -d postgres://localhost/mydb --target-schemas public,auth -o schema/baseline.sql

# Split into multiple files by object type
pgmold dump -d postgres://localhost/mydb --split -o schema/

The --split option creates separate files for extensions, types, sequences, tables, functions, views, triggers, and policies.

After this, your schema files match the database exactly and pgmold plan shows zero operations.

Workflow After Baseline

  1. Make changes by editing the SQL schema files
  2. Preview with pgmold plan -s sql:schema/ -d postgres://localhost/mydb
  3. Apply with pgmold apply -s sql:schema/ -d postgres://localhost/mydb

Integrating with Existing Migration Systems

pgmold is declarative -- it computes diffs and applies them directly. To maintain compatibility with an existing migration system:

# Generate a numbered migration file automatically
pgmold migrate \
  -s sql:schema/ \
  -d postgres://localhost/mydb \
  --migrations ./migrations \
  --name "add_email_column"
# Creates: migrations/0044_add_email_column.sql

# Or manually capture output
pgmold diff --from sql:current.sql --to sql:schema/ > migrations/0044_my_change.sql

The migrate command auto-detects the next migration number. Use pgmold for diffing while keeping your existing migration runner.
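Auto-detecting the next number amounts to scanning the migrations directory for the highest existing prefix. A minimal sketch, assuming zero-padded NNNN_name.sql filenames (the actual scheme pgmold uses may differ):

```python
# Sketch of next-migration-number detection (illustrative; assumes
# zero-padded "NNNN_name.sql" filenames).
import re
import tempfile
from pathlib import Path

def next_migration_name(migrations_dir: Path, name: str) -> str:
    numbers = [
        int(m.group(1))
        for p in migrations_dir.glob("*.sql")
        if (m := re.match(r"(\d+)_", p.name))
    ]
    return f"{max(numbers, default=0) + 1:04d}_{name}.sql"

d = Path(tempfile.mkdtemp())
(d / "0042_init.sql").touch()
(d / "0043_add_posts.sql").touch()
print(next_migration_name(d, "add_email_column"))  # -> 0044_add_email_column.sql
```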

CI Integration

pgmold includes a GitHub Action for detecting schema drift in CI/CD pipelines.

GitHub Action Usage

- name: Check for schema drift
  uses: fmguerreiro/pgmold/.github/actions/drift-check@main
  with:
    schema: 'sql:schema/'
    database: ${{ secrets.DATABASE_URL }}
    target-schemas: 'public,auth'
    fail-on-drift: 'true'

Inputs:

  • schema (required): Path to schema SQL file(s), space-separated for multiple.
  • database (required): PostgreSQL connection string.
  • target-schemas (optional): Comma-separated list of schemas to introspect. Default: public.
  • version (optional): pgmold version to install. Default: latest.
  • fail-on-drift (optional): Whether to fail the action if drift is detected. Default: true.

Outputs:

  • has-drift: Whether drift was detected (true/false).
  • expected-fingerprint: Expected schema fingerprint from SQL files.
  • actual-fingerprint: Actual schema fingerprint from database.
  • report: Full JSON drift report.

Example Workflow

See .github/workflows/drift-check-example.yml.example for a complete example. Basic usage:

name: Schema Drift Check

on:
  schedule:
    - cron: '0 8 * * *'  # Daily at 8am UTC
  workflow_dispatch:

jobs:
  drift-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check for schema drift
        uses: fmguerreiro/pgmold/.github/actions/drift-check@main
        with:
          schema: 'sql:schema/'
          database: ${{ secrets.DATABASE_URL }}

CLI Drift Detection

For local or custom CI environments, use the drift command directly:

# Get JSON report with exit code 1 if drift detected
pgmold drift -s sql:schema/ -d postgres://localhost/mydb -j

# Example output:
# {
#   "has_drift": true,
#   "expected_fingerprint": "abc123...",
#   "actual_fingerprint": "def456...",
#   "differences": [
#     "Table users has extra column in database: last_login TIMESTAMP"
#   ]
# }

Drift detection compares SHA256 fingerprints of normalized schemas. Any difference triggers drift.
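The fingerprint idea can be shown with a toy normalizer: hash a canonical form of the schema so that formatting differences cancel out while real differences change the digest. This sketch folds only case and whitespace; the normalization pgmold actually applies is more involved.

```python
# Sketch of fingerprint-based drift detection (illustrative; pgmold's
# real normalization goes well beyond case and whitespace folding).
import hashlib

def fingerprint(schema_sql: str) -> str:
    normalized = " ".join(schema_sql.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

expected = fingerprint("CREATE TABLE users (id BIGINT PRIMARY KEY);")
actual = fingerprint("create table users (id bigint\n  primary key);")
print(expected == actual)  # -> True: formatting differences are not drift

drifted = fingerprint("CREATE TABLE users (id BIGINT PRIMARY KEY, last_login TIMESTAMP);")
print(expected == drifted)  # -> False: a real difference triggers drift
```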

Terraform Provider

pgmold is available as a Terraform provider for infrastructure-as-code workflows.

Installation

terraform {
  required_providers {
    pgmold = {
      source  = "fmguerreiro/pgmold"
      version = "~> 0.3"
    }
  }
}

provider "pgmold" {}

Usage

resource "pgmold_schema" "app" {
  schema_file       = "${path.module}/schema.sql"
  database_url      = var.database_url
  allow_destructive = false  # Set true to allow DROP operations
}

When the schema file or database changes, Terraform diffs against the live database and applies only the necessary migrations.

Attributes

Name               Type          Required  Description
schema_file        string        yes       Path to SQL schema file
database_url       string        yes       PostgreSQL connection URL
target_schemas     list(string)  no        PostgreSQL schemas to manage (default: ["public"])
allow_destructive  bool          no        Allow DROP operations (default: false)

Computed attributes:

  • id - Resource identifier
  • schema_hash - SHA256 hash of schema file
  • applied_at - Timestamp of last migration
  • migration_count - Number of operations applied

Migration Resource

Generate numbered migration files instead of applying directly:

resource "pgmold_migration" "app" {
  schema_file  = "${path.module}/schema.sql"
  database_url = var.database_url
  output_dir   = "${path.module}/migrations"
  prefix       = "V"  # Flyway-style prefix
}

Safety Rules

By default, pgmold blocks destructive operations:

  • DROP TABLE, DROP COLUMN, DROP ENUM require --allow-destructive
  • Type narrowing and SET NOT NULL produce warnings

Set PGMOLD_PROD=1 for production mode, which blocks table drops entirely.
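The gating logic described above can be sketched as a simple plan check: scan generated statements for destructive operations, block them unless the flag is set, and block table drops unconditionally in production mode. This is an illustration of the policy, not pgmold's lint implementation.

```python
# Sketch of destructive-operation gating (illustrative; pgmold's lint
# rules are richer than this pattern check).
import re

DESTRUCTIVE = re.compile(r"\b(DROP TABLE|DROP COLUMN|DROP TYPE)\b", re.IGNORECASE)

def check_plan(statements, allow_destructive=False, prod=False):
    """Return (statement, reason) pairs for every blocked operation."""
    blocked = []
    for stmt in statements:
        m = DESTRUCTIVE.search(stmt)
        if not m:
            continue
        if prod and m.group(1).upper() == "DROP TABLE":
            blocked.append((stmt, "table drops are always blocked in production mode"))
        elif not allow_destructive:
            blocked.append((stmt, "requires --allow-destructive"))
    return blocked

plan = [
    "ALTER TABLE users ADD COLUMN email TEXT;",
    "ALTER TABLE users DROP COLUMN legacy_id;",
]
print(check_plan(plan))                          # blocked without the flag
print(check_plan(plan, allow_destructive=True))  # -> [] (allowed)
```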

Comparison with Other Tools

vs Declarative Schema-as-Code Tools

These tools share pgmold's approach: define desired state, compute diffs automatically.

Feature               pgmold        Atlas          pg-schema-diff   pgschema
Language              Rust          Go             Go               Go
Schema Format         Native SQL    HCL, SQL, ORM  Native SQL       SQL
Multi-DB Support      PostgreSQL    ✅ Many        PostgreSQL       PostgreSQL
Drift Detection       ✅
Lock Hazard Warnings
Safety Linting        ✅
RLS Policies          ✅
Partitioned Tables    ✅                           ?
Cloud Service                       Atlas Cloud
Library Mode

vs Migration-Based Tools

Traditional tools where you write numbered migration files manually.

Feature                    pgmold               Flyway         Liquibase    Sqitch
Approach                   Declarative          Versioned      Versioned    Plan-based
Auto-generates Migrations  ✅
Multi-DB Support           PostgreSQL           ✅ Many        ✅ Many      ✅ Many
Drift Detection            ✅ (preview)
Rollback Scripts           Auto (reverse diff)  Manual         Manual       Required
Enterprise Features                             Teams edition  Pro edition

When to Choose pgmold

  • Pure SQL schemas -- no HCL or DSLs to learn
  • PostgreSQL-only projects needing deep PG integration
  • Single binary -- no JVM/Go runtime required
  • CI/CD drift detection
  • Safety-first workflows with destructive operation guardrails
  • RLS policies as first-class citizens

When to Choose Alternatives

Development

# Build
cargo build

# Test
cargo test

# Run integration tests (requires Docker)
cargo test --test integration

License

MIT