pub struct Db {
pub storage: Arc<dyn StorageBackend>,
pub tx: Sender<String>,
pub indexes: Arc<DashMap<String, DashMap<String, DashSet<String>>>>,
pub query_heatmap: Arc<DashMap<String, u32>>,
pub hot_threshold: usize,
pub rate_limit_requests: u32,
pub rate_limit_window: u64,
pub max_body_size: usize,
pub max_keys_per_request: usize,
pub schemas: Arc<DashMap<String, Arc<(Value, Validator)>>>,
pub post_backup_script: Option<String>,
pub tiered_mode: bool,
pub started_at: Instant,
/* private fields */
}
The central database handle. Cheap to clone — all clones share the same state.
This struct is the public API of the engine. All database operations go through methods on this struct, which delegate to the operations module.
Fields
storage: Arc<dyn StorageBackend>
The storage backend — handles persistence to disk or OPFS.
pub so handlers can access it directly if needed (e.g. for compaction).
Arc<dyn StorageBackend> = shared pointer to any type implementing the trait.
tx: Sender<String>
Broadcast channel sender for real-time change notifications.
When a document is inserted, updated, or deleted, a JSON event is sent
on this channel. WebSocket handlers subscribe to receive these events.
pub so the WebSocket handler in main.rs can call subscribe().
indexes: Arc<DashMap<String, DashMap<String, DashSet<String>>>>
The index store.
Key format: "collection:field" (e.g. "users:role").
Value: field_value → set of document keys with that value.
e.g. "users:role" → { "admin" → {"u1"}, "user" → {"u2", "u3"} }
pub so handlers.rs can check for index existence directly.
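The two-level index layout can be sketched with std collections (DashMap/DashSet swapped for HashMap/HashSet; `index_lookup` is a hypothetical helper, not part of the API):

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-in for the DashMap-based index store:
// "collection:field" -> field_value -> set of document keys.
type Index = HashMap<String, HashMap<String, HashSet<String>>>;

// Look up document keys whose `field` equals `value`.
// Returns an empty set when the index or value is absent.
fn index_lookup(indexes: &Index, collection: &str, field: &str, value: &str) -> HashSet<String> {
    indexes
        .get(&format!("{}:{}", collection, field))
        .and_then(|by_value| by_value.get(value))
        .cloned()
        .unwrap_or_default()
}
```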
query_heatmap: Arc<DashMap<String, u32>>
Query frequency counter for auto-indexing. Key: "collection:field". Value: number of times queried. When a field reaches 3 queries, an index is auto-created.
hot_threshold: usize
The maximum number of documents per collection to keep in RAM (Hot). If a collection exceeds this, older documents are paged out to disk (Cold). Default is 50,000.
rate_limit_requests: u32
Max requests per window.
rate_limit_window: u64
Window size in seconds.
max_body_size: usize
Maximum request body size in bytes.
max_keys_per_request: usize
Maximum keys allowed per request.
schemas: Arc<DashMap<String, Arc<(Value, Validator)>>>
Registered JSON schemas per collection. Key: collection name → Value: (original JSON, compiled Validator).
post_backup_script: Option<String>
Optional shell command to execute after a successful backup. Supports the {SNAPSHOT_PATH} placeholder.
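The placeholder expansion described above amounts to a single string substitution; a minimal sketch (`expand_backup_command` is a hypothetical helper, not part of the API):

```rust
// Sketch of how a post-backup hook command might be expanded.
// {SNAPSHOT_PATH} is the only placeholder the docs describe.
fn expand_backup_command(template: &str, snapshot_path: &str) -> String {
    template.replace("{SNAPSHOT_PATH}", snapshot_path)
}
```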
tiered_mode: bool
Whether tiered (hot + cold) storage mode is active.
started_at: Instant
Timestamp of when this Db instance was opened; used for uptime calculation.
Implementations
impl Db
pub fn open(config: DbConfig) -> Result<Self, DbError>
Open (or create) a database at the given file path. Only available on native (non-WASM) builds.
sync_mode — if true, use SyncDiskStorage (flush on every write).
if false, use AsyncDiskStorage (flush every 50ms).
Ignored when tiered_mode is true.
tiered_mode — if true, use TieredStorage (hot + cold two-tier backend).
Hot writes go to the active log; cold data is archived and
read via mmap on startup. Best for large datasets (100k+ docs).
Enable with STORAGE_MODE=tiered environment variable.
encryption_key — if Some, wrap the storage in EncryptedStorage.
if None, data is stored in plaintext (not recommended).
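The precedence among these config flags can be modeled with a small decision function (the enum and function are illustrative, not crate API; encryption, when configured, wraps whichever backend is chosen):

```rust
// Illustrative model of the backend-selection rules described above.
#[derive(Debug, PartialEq)]
enum Backend {
    Tiered,    // hot + cold two-tier storage
    SyncDisk,  // flush on every write
    AsyncDisk, // flush every 50 ms
}

fn choose_backend(sync_mode: bool, tiered_mode: bool) -> Backend {
    if tiered_mode {
        // tiered_mode takes precedence; sync_mode is ignored
        Backend::Tiered
    } else if sync_mode {
        Backend::SyncDisk
    } else {
        Backend::AsyncDisk
    }
}
```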
impl Db
pub fn hot_keys_count(&self) -> usize
Returns the total number of hot (in-memory) keys across all collections.
pub fn subscribe(&self) -> Receiver<String>
Create a new broadcast receiver for real-time change notifications. Each call returns an independent receiver — multiple WebSocket handlers can each subscribe and receive all events independently.
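The fan-out semantics (every subscriber sees every event) can be modeled with std mpsc channels; the real type is a broadcast channel, so this `Broadcast` struct is purely an illustrative stand-in:

```rust
use std::sync::mpsc;

// Minimal fan-out sketch: each subscriber gets its own receiver,
// and every event is cloned to all of them.
struct Broadcast {
    senders: Vec<mpsc::Sender<String>>,
}

impl Broadcast {
    fn new() -> Self {
        Broadcast { senders: Vec::new() }
    }

    // Mirrors Db::subscribe(): an independent receiver per call.
    fn subscribe(&mut self) -> mpsc::Receiver<String> {
        let (tx, rx) = mpsc::channel();
        self.senders.push(tx);
        rx
    }

    // Deliver one event to every subscriber; disconnected ones are skipped.
    fn send(&self, event: &str) {
        for tx in &self.senders {
            let _ = tx.send(event.to_string());
        }
    }
}
```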
pub fn get(&self, collection: &str, keys: Vec<String>) -> HashMap<String, Value>
Retrieve documents by their keys. Returns a HashMap of found key→value pairs. Missing keys are silently skipped. Pass a single key to retrieve one document.
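The "missing keys are silently skipped" behavior can be sketched on a plain map (values simplified to String; `get_existing` is a hypothetical helper):

```rust
use std::collections::HashMap;

// Models the documented get() semantics: return only the keys that
// exist in the store; missing keys produce no entry and no error.
fn get_existing(store: &HashMap<String, String>, keys: Vec<String>) -> HashMap<String, String> {
    keys.into_iter()
        .filter_map(|k| store.get(&k).cloned().map(|v| (k, v)))
        .collect()
}
```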
pub fn get_all(&self, collection: &str) -> HashMap<String, Value>
Retrieve all documents in a collection as a HashMap.
pub fn insert(
    &self,
    collection: &str,
    items: Vec<(String, Value)>,
) -> Result<(), DbError>
Insert or overwrite multiple documents in one call. Each item is a (key, value) pair. Writes are persisted to storage.
pub fn update(
    &self,
    collection: &str,
    key: &str,
    updates: Value,
) -> Result<bool, DbError>
Partially update a document — merges updates into the existing document.
Returns true if the document was found and updated, false if not found.
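The merge-and-report-found contract can be sketched on a plain map (the real API merges serde_json::Value objects; `update_doc` is a hypothetical helper):

```rust
use std::collections::HashMap;

// Models the documented partial-update semantics: fields in `updates`
// overwrite or extend the stored document; returns false when the key
// does not exist, true when it was found and merged.
fn update_doc(
    store: &mut HashMap<String, HashMap<String, String>>,
    key: &str,
    updates: HashMap<String, String>,
) -> bool {
    match store.get_mut(key) {
        Some(doc) => {
            doc.extend(updates); // shallow merge: updates win on conflicts
            true
        }
        None => false,
    }
}
```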
pub fn delete(&self, collection: &str, keys: Vec<String>) -> Result<(), DbError>
Delete one or more documents by key. Pass a single key to delete one document.
pub fn delete_collection(&self, collection: &str) -> Result<(), DbError>
Drop an entire collection — removes all documents and its indexes.
pub fn track_query(&self, collection: &str, field: &str)
Track that `field` was queried in `collection`, and auto-create an index
if the field has been queried 3 or more times.
Errors are silently ignored — auto-indexing is best-effort.
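The counter-and-threshold logic can be sketched against a plain heatmap map (the return value signaling "index now warranted" is an illustrative simplification):

```rust
use std::collections::HashMap;

// Models track_query(): bump the "collection:field" counter and report
// whether the auto-index threshold (3 queries) has been reached.
fn track_query(heatmap: &mut HashMap<String, u32>, collection: &str, field: &str) -> bool {
    let count = heatmap.entry(format!("{}:{}", collection, field)).or_insert(0);
    *count += 1;
    *count >= 3
}
```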
pub fn set_schema(&self, collection: &str, schema: Value) -> Result<(), DbError>
Register a JSON schema for a collection. All subsequent writes to this collection must conform to this schema.
pub fn clear_all(&self)
Wipe all in-memory state — documents, indexes, and query heatmap. Used by the WASM layer when a browser tab unloads in in-memory mode, so that any tab refresh clears the shared RAM store for all tabs.
pub fn compact(&self) -> Result<(), DbError>
Compact the log file — rewrite it to contain only the current state.
This removes all dead entries (superseded INSERTs, DELETE tombstones) and writes a binary snapshot for fast next startup.
The compacted log contains:
- One INSERT entry per live document (current value only).
- One INDEX entry per registered index (index data is rebuilt on replay).
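The dead-entry elimination can be sketched as a replay that keeps only the last write per key and drops tombstoned keys (`Entry` is a simplified stand-in for the crate's LogEntry; real compaction also emits INDEX entries):

```rust
use std::collections::HashMap;

// Simplified log entry: superseded Inserts and Delete tombstones are
// the "dead entries" that compaction removes.
#[derive(Clone, Debug, PartialEq)]
enum Entry {
    Insert(String, String), // key, value
    Delete(String),
}

// Replay the log into the live state, then re-emit one Insert per
// surviving document (sorted for a deterministic output).
fn compact(log: &[Entry]) -> Vec<Entry> {
    let mut live: HashMap<String, String> = HashMap::new();
    for e in log {
        match e {
            Entry::Insert(k, v) => {
                live.insert(k.clone(), v.clone());
            }
            Entry::Delete(k) => {
                live.remove(k);
            }
        }
    }
    let mut pairs: Vec<(String, String)> = live.into_iter().collect();
    pairs.sort();
    pairs.into_iter().map(|(k, v)| Entry::Insert(k, v)).collect()
}
```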
pub fn evict_collection(
    &self,
    collection: &str,
    limit: usize,
) -> Result<usize, DbError>
Evict documents from RAM to disk for a collection if it exceeds the threshold.
This converts Hot(Value) entries into Cold(RecordPointer) entries.
In this v1, it re-scans the log to find the exact byte offsets for the documents.
pub fn recover_to(
    storage: &dyn StorageBackend,
    to_time: Option<u64>,
    to_seq: Option<u64>,
) -> Result<Vec<LogEntry>, DbError>
Recover the database state to a specific point in time or sequence number. Returns the recovered state as a Vec of LogEntries that can be written to a snapshot.
This is a utility function used by the CLI for point-in-time recovery (PITR).
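The sequence-number cutoff can be sketched as a filtered replay (`SeqEntry` is a simplified stand-in for LogEntry and its metadata; the real function also supports a timestamp cutoff via to_time):

```rust
// Simplified log entry carrying only the sequence number used for the cutoff.
struct SeqEntry {
    seq: u64,
    payload: String,
}

// Keep entries at or before the requested sequence number;
// with no cutoff, the full log is returned unchanged.
fn recover_to_seq(log: Vec<SeqEntry>, to_seq: Option<u64>) -> Vec<SeqEntry> {
    match to_seq {
        Some(cut) => log.into_iter().filter(|e| e.seq <= cut).collect(),
        None => log,
    }
}
```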