ic-sql-migrate
A lightweight database migration library for Internet Computer (ICP) canisters with support for SQLite and Turso databases.
Table of Contents
- Features
- Prerequisites
- Installation
- Deployment Configuration
- Quick Start
- How It Works
- Migration Best Practices
- Examples
- API Reference
- Troubleshooting
- Contributing
- License
Features
- 🚀 Multi-Database Support: Works with SQLite (via `ic-rusqlite`) and Turso databases
- 📦 Compile-Time Embedding: Migration files are embedded into your binary at compile time
- 🔄 Automatic Migration: Tracks and applies migrations automatically on canister init and upgrade
- 🔒 Transactional: All migrations run in transactions for safety
- 🏗️ ICP Native: Designed specifically for Internet Computer canisters
Prerequisites
IMPORTANT: You must enable exactly one database feature (`sqlite` or `turso`) for this library to work. There is no default feature.
In addition to having the Rust toolchain and dfx set up, you need to install the wasi2ic tool, which replaces WebAssembly System Interface (WASI) specific function calls with their corresponding polyfill implementations. This allows you to run Wasm binaries compiled for wasm32-wasi on the Internet Computer.
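For example, assuming wasi2ic is distributed as a Cargo binary crate (see its repository if your setup differs), it can be installed with:

```bash
# Install the wasi2ic binary into ~/.cargo/bin
cargo install wasi2ic
```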
Configure dfx.json
You also need to configure your `dfx.json` to compile for the `wasm32-wasip1` target and use `wasi2ic` to process the binary; see the Deployment Configuration section below for the full setup.
Installation
Add the library to both `[dependencies]` and `[build-dependencies]` in your `Cargo.toml`:
For SQLite support (most common for ICP):
```toml
[dependencies]
# Note: You MUST specify either "sqlite" or "turso" feature - there is no default
ic-sql-migrate = { version = "0.0.4", features = ["sqlite"] }
ic-rusqlite = { version = "0.4.2", features = ["precompiled"], default-features = false }
ic-cdk = "0.18.7"

[build-dependencies]
ic-sql-migrate = "0.0.4"
```
For Turso support:
```toml
[dependencies]
# Note: You MUST specify either "sqlite" or "turso" feature - there is no default
ic-sql-migrate = { version = "0.0.4", features = ["turso"] }
turso = "0.1.4"
ic-cdk = "0.18.7"

[build-dependencies]
ic-sql-migrate = "0.0.4"
```
Important:
- You MUST choose exactly one database feature (`sqlite` or `turso`)
- The features are mutually exclusive (you cannot use both)
- There is no default feature - the library will not work without selecting one
Deployment Configuration
dfx.json Setup
For SQLite support, you need to configure your `dfx.json` to compile for the `wasm32-wasip1` target and use `wasi2ic` to process the binary:
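A minimal sketch of such a configuration, assuming a custom canister named `my_canister`; the canister name, candid path, and output file name are placeholders to adapt to your project:

```json
{
  "canisters": {
    "my_canister": {
      "type": "custom",
      "candid": "my_canister.did",
      "build": [
        "cargo build --target wasm32-wasip1 --release",
        "wasi2ic target/wasm32-wasip1/release/my_canister.wasm target/wasm32-wasip1/release/my_canister-ic.wasm"
      ],
      "wasm": "target/wasm32-wasip1/release/my_canister-ic.wasm"
    }
  }
}
```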
This configuration:
- Compiles your canister for the `wasm32-wasip1` target (required for SQLite)
- Uses `wasi2ic` to convert WASI function calls to IC-compatible polyfills
- Points dfx to the processed WASM file for deployment
Quick Start
1. Create migration files
Create a migrations
directory in your project root and add SQL files:
```
migrations/
├── 001_initial.sql
├── 002_add_users.sql
└── 003_add_indexes.sql
```
Example migration file (`migrations/001_initial.sql`):

```sql
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    email TEXT NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```
2. Set up build.rs
Create a build.rs
file in your project root:
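A minimal sketch, assuming `ic_sql_migrate::list` takes the path to the migrations directory; the exact argument shape may differ between versions, so check the crate documentation:

```rust
// build.rs
fn main() {
    // Discover all .sql files in ./migrations and generate the code that
    // ic_sql_migrate::include!() embeds into the binary at compile time.
    // NOTE: the argument shape is an assumption, not the confirmed API.
    ic_sql_migrate::list(Some("migrations")).expect("failed to list migrations");
}
```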
3. Use in your canister
SQLite Example (with ic-rusqlite):
```rust
use ic_sql_migrate::Migration;

// Include all migrations at compile time
static MIGRATIONS: &[Migration] = ic_sql_migrate::include!();
```
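A sketch of wiring the migrations into the canister lifecycle. The connection helper and the path of the synchronous migrate function are assumptions (see the API Reference below and the crate docs for the exact calls):

```rust
// Sketch only: `ic_rusqlite::with_connection` and
// `ic_sql_migrate::sqlite::migrate` are assumed names, not confirmed API.
#[ic_cdk::init]
fn init() {
    ic_rusqlite::with_connection(|mut conn| {
        // Runs every pending migration inside a single transaction.
        ic_sql_migrate::sqlite::migrate(&mut conn, MIGRATIONS)
            .expect("failed to run migrations");
    });
}

#[ic_cdk::post_upgrade]
fn post_upgrade() {
    ic_rusqlite::with_connection(|mut conn| {
        // Applies only the migrations added since the last deployment.
        ic_sql_migrate::sqlite::migrate(&mut conn, MIGRATIONS)
            .expect("failed to run migrations");
    });
}
```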
Turso Example:
```rust
use std::cell::RefCell;

use ic_sql_migrate::Migration;
use turso::Connection;

// Include all migrations at compile time
static MIGRATIONS: &[Migration] = ic_sql_migrate::include!();

thread_local! {
    // Turso connection, opened in the async init/post_upgrade hooks.
    static CONNECTION: RefCell<Option<Connection>> = RefCell::new(None);
}

// In async #[ic_cdk::init] and #[ic_cdk::post_upgrade] hooks, open the
// connection, store it in CONNECTION, and run the async migrate function
// for Turso (see the API Reference below); query and update methods then
// reuse the stored connection.
```
How It Works
1. Build Time: The `list()` function in `build.rs` scans your migrations directory and generates code to embed all SQL files into your canister binary.
2. WASI to IC Conversion: The `wasi2ic` tool converts WASI-specific function calls to IC-compatible polyfills, allowing the WASM binary to run on the Internet Computer.
3. Canister Init/Upgrade:
   - On `init`: runs all migrations to set up the database schema
   - On `post_upgrade`: runs any new migrations added since the last deployment
4. Migration Tracking: A `_migrations` table is automatically created to track which migrations have been applied, preventing duplicate execution.
5. Transaction Safety: All pending migrations run in a single transaction. If any migration fails, all changes are rolled back.
Migration Best Practices
- Naming Convention: Use sequential numbering like `001_description.sql`, `002_description.sql` to ensure correct execution order
- Forward-Only: This library only supports forward migrations (no rollbacks). Plan your schema changes carefully.
- Idempotent SQL: While migrations are tracked, write idempotent SQL when possible using `IF NOT EXISTS` clauses (see the example below)
- Small Changes: Keep each migration focused on a single logical change
- Test Locally: Always test migrations against a local replica with `dfx deploy` before mainnet deployment
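For example, idempotent DDL can be re-applied safely:

```sql
-- Safe to run even if the table or index already exists
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE
);

CREATE INDEX IF NOT EXISTS idx_users_username ON users (username);
```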
Examples
Complete working examples are available in the repository:
- `examples/sqlite` - ICP canister with SQLite
- `examples/turso` - ICP canister with Turso
Running the SQLite Example
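Assuming dfx and wasi2ic are installed, a typical local run looks like this (run from the repository root; adjust paths if the example layout differs):

```bash
cd examples/sqlite
dfx start --clean --background
dfx deploy
```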
API Reference
Core Functions
For SQLite
Executes all pending migrations synchronously.
For Turso
Executes all pending migrations asynchronously.
Build Script Function
Discovers and embeds migration files at compile time. Call this in `build.rs`.
Macros
ic_sql_migrate::include!()
Includes all migrations discovered by list()
at compile time.
Types
Migration
Represents a single migration file embedded into the binary at compile time.
Error
Custom error type that wraps database-specific errors and migration failures.
Migration Table Schema
The library automatically creates this table:
```sql
CREATE TABLE IF NOT EXISTS _migrations (
    id TEXT PRIMARY KEY,
    applied_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
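To see which migrations have been applied, you can query this table directly:

```sql
SELECT id, applied_at FROM _migrations ORDER BY id;
```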
Troubleshooting
Library doesn't compile / "module not found" errors
You must enable either the `sqlite` or `turso` feature in your `Cargo.toml`. The library has no default features and will not work without explicitly selecting a database backend.
"Both features enabled" error
You can only use one database backend at a time. Remove one of the features.
Migrations not found
Ensure your `migrations` directory exists and contains `.sql` files, and that `build.rs` is properly configured.
Migration failures
Check the canister logs with dfx canister logs <canister_name>
for detailed error messages.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
Kristofer Lund
Acknowledgments
Built specifically for the Internet Computer ecosystem to provide reliable database migrations for canisters using SQL databases.