moltendb-core 0.3.0-beta.5

MoltenDB core engine – in-memory DashMap storage, WAL persistence, query evaluation. No HTTP, no auth.

moltendb-core

🌋 The Pure Engine Crate

In-memory document store · Append-only WAL · Query evaluator · Analytics (🚧 WIP)
Zero knowledge of HTTP, auth, JWT, or WASM bindings.



What is this crate?

moltendb-core is the heart of MoltenDB. It contains every piece of logic that is shared between the HTTP server (moltendb-server) and the browser WASM adapter (moltendb-wasm):

  • In-memory store – DashMap-backed document collections, keyed by (collection, key).
  • Append-only WAL – every write is appended to a log file (LogEntry: INSERT, DELETE, DROP, INDEX, ENC). On startup the log is replayed into memory.
  • Storage backends – DiskStorage (sync/async), TieredStorage (hot + cold log), EncryptedStorage (ChaCha20-Poly1305), OpfsStorage (WASM / browser OPFS).
  • Query evaluator – $eq, $ne, $gt, $gte, $lt, $lte, $in, $nin, $contains, $or, $and, field projection (include / exclude), dot-notation for nested fields, joins, sort, count, offset.
  • Analytics engine – COUNT, SUM, AVG, MIN, MAX with optional WHERE filtering. ⚠️ Under active development – not ready for production use.
  • Auto-indexing – query_heatmap tracks hot fields and builds indexes automatically.
  • Handler pipeline – process_get, process_set, process_update, process_delete, process_analytics – the single source of truth consumed by both the server and the WASM adapter.
  • Input validation – collection name, key, and field name rules enforced before any operation reaches the engine.
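To give a feel for what the query evaluator does with dot-notation paths and operators, here is a self-contained, std-only sketch. It is a deliberate simplification, not the crate's actual implementation: the real engine works on serde_json::Value, and the Val enum, get_path, and matches names below are hypothetical.

```rust
use std::collections::HashMap;

/// Minimal stand-in for a JSON value (hypothetical; the engine uses serde_json).
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Num(f64),
    Str(String),
    Obj(HashMap<String, Val>),
}

/// Resolve a dot-notation path like "profile.age" against a nested document.
fn get_path<'a>(doc: &'a Val, path: &str) -> Option<&'a Val> {
    let mut cur = doc;
    for seg in path.split('.') {
        match cur {
            Val::Obj(map) => cur = map.get(seg)?,
            _ => return None,
        }
    }
    Some(cur)
}

/// Evaluate one condition, e.g. ("$gt", 18.0), against the field at `path`.
fn matches(doc: &Val, path: &str, op: &str, operand: &Val) -> bool {
    let Some(field) = get_path(doc, path) else {
        return false;
    };
    match (op, field, operand) {
        ("$eq", a, b) => a == b,
        ("$ne", a, b) => a != b,
        ("$gt", Val::Num(a), Val::Num(b)) => a > b,
        ("$gte", Val::Num(a), Val::Num(b)) => a >= b,
        ("$lt", Val::Num(a), Val::Num(b)) => a < b,
        ("$lte", Val::Num(a), Val::Num(b)) => a <= b,
        _ => false,
    }
}

fn main() {
    let mut profile = HashMap::new();
    profile.insert("age".to_string(), Val::Num(30.0));
    let mut doc = HashMap::new();
    doc.insert("name".to_string(), Val::Str("Alice".to_string()));
    doc.insert("profile".to_string(), Val::Obj(profile));
    let doc = Val::Obj(doc);

    // "profile.age" > 18 matches; "profile.age" < 18 does not.
    assert!(matches(&doc, "profile.age", "$gt", &Val::Num(18.0)));
    assert!(!matches(&doc, "profile.age", "$lt", &Val::Num(18.0)));
    println!("query sketch ok");
}
```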

Crate type

[lib]
crate-type = ["rlib"]

moltendb-core compiles to a native rlib. It is not a cdylib – WASM bindings live in the separate moltendb-wasm crate. This keeps the native dependency tree clean (no wasm-bindgen, no web-sys).

WASM-specific code (OpfsStorage, Db::open_wasm) is gated behind #[cfg(target_arch = "wasm32")] and only compiled when the crate is used as a dependency of moltendb-wasm.
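The gating pattern looks like the following self-contained sketch. The function body and name are illustrative only, not the crate's real code; only the #[cfg(target_arch = "wasm32")] mechanism is what the paragraph above describes.

```rust
// Sketch of target-gating: one item compiled per target architecture.

/// Only compiled for wasm32 targets (e.g. when built through moltendb-wasm).
#[cfg(target_arch = "wasm32")]
fn default_backend_name() -> &'static str {
    "OpfsStorage (browser OPFS)"
}

/// Compiled for every native target; pulls in no wasm-bindgen or web-sys.
#[cfg(not(target_arch = "wasm32"))]
fn default_backend_name() -> &'static str {
    "DiskStorage (native)"
}

fn main() {
    // On a native build this prints the DiskStorage variant.
    println!("default backend: {}", default_backend_name());
}
```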


Add to your project

[dependencies]
moltendb-core = "0.3.0-beta.5"

Minimal example

use moltendb_core::engine::Db;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open (or create) a database log file on disk
    let db = Db::open("./my_app.log").await?;

    // Insert a document
    db.set("users", "u1", serde_json::json!({
        "name": "Alice",
        "role": "admin"
    })).await?;

    // Read it back
    let user = db.get("users", "u1").await?;
    println!("{}", user);

    Ok(())
}

Using the handler pipeline (same API as the HTTP server)

use moltendb_core::{engine::Db, handlers};
use serde_json::json;

// (inside an async context)
let db = Db::open("./my_app.log").await?;

let payload = json!({
    "collection": "users",
    "where": { "role": "admin" },
    "fields": ["name", "role"],
    "sort": [{ "field": "name", "order": "asc" }]
});

let (status_code, result) = handlers::process_get::process_get(&db, &payload, 10 * 1024 * 1024);
println!("{} – {}", status_code, result);

Hybrid Bitcask Storage

MoltenDB uses a Hybrid Bitcask-inspired Storage Model. Frequently accessed data is kept in RAM (Hot) as parsed JSON for sub-microsecond reads. Less frequently used data is paged out to disk (Cold) as byte-offsets, freeing up memory. This allows MoltenDB to handle datasets much larger than the available RAM while maintaining high performance for the active working set.

By default, any collection exceeding 50,000 documents will automatically evict the oldest documents to the Cold tier (disk/OPFS). This limit is configurable when opening the database.
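The hot/cold split can be pictured with a small std-only sketch. This is a toy model under stated assumptions: the TieredCollection type, its fields, and the FIFO eviction shown here are hypothetical simplifications; the real engine tracks byte offsets into its WAL/cold log and uses access frequency, not just insertion order.

```rust
use std::collections::{HashMap, VecDeque};

/// Toy model of hybrid tiering: hot documents live in RAM in full; evicted
/// documents keep only a byte offset into the on-disk cold log.
struct TieredCollection {
    max_hot: usize,
    hot: HashMap<String, String>,      // key -> full document (RAM)
    cold: HashMap<String, u64>,        // key -> byte offset in the cold log
    insertion_order: VecDeque<String>, // oldest first, for eviction
    next_offset: u64,
}

impl TieredCollection {
    fn new(max_hot: usize) -> Self {
        Self {
            max_hot,
            hot: HashMap::new(),
            cold: HashMap::new(),
            insertion_order: VecDeque::new(),
            next_offset: 0,
        }
    }

    fn insert(&mut self, key: &str, doc: String) {
        if self.hot.len() >= self.max_hot {
            // Evict the oldest hot document: pretend it was appended to the
            // cold log, and keep only its offset in memory.
            if let Some(old) = self.insertion_order.pop_front() {
                if let Some(evicted) = self.hot.remove(&old) {
                    self.cold.insert(old, self.next_offset);
                    self.next_offset += evicted.len() as u64;
                }
            }
        }
        self.hot.insert(key.to_string(), doc);
        self.insertion_order.push_back(key.to_string());
    }

    fn is_hot(&self, key: &str) -> bool {
        self.hot.contains_key(key)
    }
}

fn main() {
    let mut users = TieredCollection::new(2); // tiny limit for the demo
    users.insert("u1", "{\"name\":\"Alice\"}".to_string());
    users.insert("u2", "{\"name\":\"Bob\"}".to_string());
    users.insert("u3", "{\"name\":\"Carol\"}".to_string()); // evicts u1
    assert!(!users.is_hot("u1") && users.cold.contains_key("u1"));
    assert!(users.is_hot("u2") && users.is_hot("u3"));
    println!("tiering sketch ok");
}
```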


Module overview

Module             Responsibility
engine             Db struct, storage backends, WAL replay, operations
engine::storage    DiskStorage, TieredStorage, EncryptedStorage, OpfsStorage
query              Query condition evaluation, field projection, joins, sort, pagination
analytics          Aggregate functions: COUNT, SUM, AVG, MIN, MAX – ⚠️ under development, not ready for use
handlers           process_get, process_set, process_update, process_delete, process_analytics
validation         Collection / key / field name validation rules

Storage modes

Mode                   Use case
DiskStorage (sync)     Low-latency writes, small datasets
DiskStorage (async)    High-throughput writes, background flush
TieredStorage          100k+ documents – separates hot and cold log files
EncryptedStorage       At-rest encryption with ChaCha20-Poly1305
OpfsStorage            Browser WASM – Origin Private File System

Design constraints

  • No longer limited by RAM. While MoltenDB is "Memory-First," the Hybrid Bitcask model allows it to page out documents to disk while keeping only the keys and offsets in RAM. A 10GB database can now comfortably run on a machine with 512MB of RAM.
  • No HTTP, no auth, no JWT. This crate has zero knowledge of the network layer. It is safe to embed in any Rust application without pulling in Axum, Tokio TLS, or any auth dependency.
  • Single writer, many readers. The DashMap store is safe for concurrent reads. Writes are serialised through the storage backend.
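The "single writer, many readers" model can be sketched with std primitives. This example uses std::sync::RwLock as a stand-in for DashMap (which is an external crate with finer-grained sharding), so it illustrates the concurrency shape rather than the crate's actual store:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Stand-in for the document store: RwLock here, sharded DashMap in the
    // real engine.
    let store: Arc<RwLock<HashMap<String, String>>> =
        Arc::new(RwLock::new(HashMap::new()));

    // One writer thread appends documents.
    let w = Arc::clone(&store);
    let writer = thread::spawn(move || {
        for i in 0..100 {
            w.write().unwrap().insert(format!("k{i}"), format!("doc{i}"));
        }
    });

    // Several reader threads scan concurrently; shared read locks never
    // block each other, only an in-flight write.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let r = Arc::clone(&store);
            thread::spawn(move || r.read().unwrap().len())
        })
        .collect();

    writer.join().unwrap();
    for h in readers {
        h.join().unwrap();
    }
    assert_eq!(store.read().unwrap().len(), 100);
    println!("concurrency sketch ok");
}
```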

Part of the MoltenDB workspace

MoltenDB/
├── moltendb-core/     ← you are here
├── moltendb-wasm/     – browser adapter (wasm-bindgen glue, WorkerDb, OPFS)
├── moltendb-auth/     – identity layer (JWT, Argon2, UserStore)
└── moltendb-server/   – network layer (Axum, TLS, CORS, CLI config)

See the root README for the full architecture overview and feature list.