pub struct Db {
pub storage: Arc<dyn StorageBackend>,
pub tx: Sender<String>,
pub indexes: Arc<DashMap<String, DashMap<String, DashSet<String>>>>,
pub query_heatmap: Arc<DashMap<String, u32>>,
pub hot_threshold: usize,
pub rate_limit_requests: u32,
pub rate_limit_window: u64,
pub max_body_size: usize,
/* private fields */
}
The central database handle. Cheap to clone — all clones share the same state.
This struct is the public API of the engine. All database operations go through methods on this struct, which delegate to the operations module.
Fields
storage: Arc<dyn StorageBackend>
The storage backend — handles persistence to disk or OPFS.
pub so handlers can access it directly if needed (e.g. for compaction).
Arc<dyn StorageBackend> = shared pointer to any type implementing the trait.
tx: Sender<String>
Broadcast channel sender for real-time change notifications.
When a document is inserted, updated, or deleted, a JSON event is sent
on this channel. WebSocket handlers subscribe to receive these events.
pub so the WebSocket handler in main.rs can call subscribe().
indexes: Arc<DashMap<String, DashMap<String, DashSet<String>>>>
The index store.
Key format: "collection:field" (e.g. "users:role").
Value: field_value → set of document keys with that value.
e.g. "users:role" → { "admin" → {"u1"}, "user" → {"u2", "u3"} }
pub so handlers.rs can check for index existence directly.
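The nested mapping shape described above can be sketched with plain std collections. This is a hypothetical single-threaded stand-in for the concurrent DashMap/DashSet store; `IndexStore`, `add_to_index`, and `lookup` are illustrative names, not crate APIs.

```rust
use std::collections::{HashMap, HashSet};

// Stand-in for the index store: "collection:field" -> field value -> doc keys.
type IndexStore = HashMap<String, HashMap<String, HashSet<String>>>;

fn add_to_index(store: &mut IndexStore, index_key: &str, value: &str, doc_key: &str) {
    store
        .entry(index_key.to_string())
        .or_default()
        .entry(value.to_string())
        .or_default()
        .insert(doc_key.to_string());
}

// Return the matching document keys, sorted for deterministic output.
fn lookup(store: &IndexStore, index_key: &str, value: &str) -> Vec<String> {
    let mut keys: Vec<String> = store
        .get(index_key)
        .and_then(|by_value| by_value.get(value))
        .map(|set| set.iter().cloned().collect())
        .unwrap_or_default();
    keys.sort();
    keys
}

fn main() {
    let mut store = IndexStore::new();
    add_to_index(&mut store, "users:role", "admin", "u1");
    add_to_index(&mut store, "users:role", "user", "u2");
    add_to_index(&mut store, "users:role", "user", "u3");
    println!("{:?}", lookup(&store, "users:role", "user"));
}
```

An indexed equality query then reduces to one hash lookup per level instead of a collection scan.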
query_heatmap: Arc<DashMap<String, u32>>
Query frequency counter for auto-indexing. Key: "collection:field". Value: number of times queried. When a field reaches 3 queries, an index is auto-created.
hot_threshold: usize
The maximum number of documents per collection to keep in RAM (Hot). If a collection exceeds this, older documents are paged out to disk (Cold). Default is 50,000.
rate_limit_requests: u32
Max requests per window.
rate_limit_window: u64
Window size in seconds.
max_body_size: usize
Maximum request body size in bytes.
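A minimal sketch of how the two rate-limit fields might drive a fixed-window limiter. The `RateLimiter` type and the explicit `window_id` parameter are illustrative only; in practice `window_id` would be computed as current Unix seconds divided by rate_limit_window.

```rust
// Fixed-window rate limiter sketch (hypothetical; not the crate's type).
struct RateLimiter {
    rate_limit_requests: u32, // max requests allowed per window
    current_window: u64,
    count: u32,
}

impl RateLimiter {
    fn new(rate_limit_requests: u32) -> Self {
        Self { rate_limit_requests, current_window: 0, count: 0 }
    }

    // `window_id` would normally be now_secs / rate_limit_window; it is
    // passed in here to keep the example deterministic.
    fn allow(&mut self, window_id: u64) -> bool {
        if window_id != self.current_window {
            self.current_window = window_id; // new window: reset the counter
            self.count = 0;
        }
        if self.count < self.rate_limit_requests {
            self.count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = RateLimiter::new(2);
    assert!(rl.allow(0));
    assert!(rl.allow(0));
    assert!(!rl.allow(0)); // third request in the same window is rejected
    assert!(rl.allow(1));  // next window resets the count
}
```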
Implementations
impl Db
pub fn open(
    path: &str,
    sync_mode: bool,
    tiered_mode: bool,
    hot_threshold: usize,
    rate_limit_requests: u32,
    rate_limit_window: u64,
    max_body_size: usize,
    encryption_key: Option<&[u8; 32]>,
) -> Result<Self, DbError>
Open (or create) a database at the given file path. Only available on native (non-WASM) builds.
sync_mode — if true, use SyncDiskStorage (flush on every write);
if false, use AsyncDiskStorage (flush every 50ms).
Ignored when tiered_mode is true.
tiered_mode — if true, use TieredStorage (hot + cold two-tier backend).
Hot writes go to the active log; cold data is archived and
read via mmap on startup. Best for large datasets (100k+ docs).
Enable with STORAGE_MODE=tiered environment variable.
encryption_key — if Some, wrap the storage in EncryptedStorage;
if None, data is stored in plaintext (not recommended).
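The backend-selection rules above can be sketched as a small decision function. The `Backend` enum and `choose_backend` are hypothetical names for illustration; the real constructors live in the crate's storage module.

```rust
// Sketch of open()'s backend selection (names are illustrative).
#[derive(Debug, PartialEq)]
enum Backend {
    SyncDisk,  // flush on every write
    AsyncDisk, // flush every 50ms
    Tiered,    // hot log + cold mmap archive
}

fn choose_backend(sync_mode: bool, tiered_mode: bool) -> Backend {
    if tiered_mode {
        Backend::Tiered // tiered_mode takes precedence; sync_mode is ignored
    } else if sync_mode {
        Backend::SyncDisk
    } else {
        Backend::AsyncDisk
    }
}

fn main() {
    assert_eq!(choose_backend(true, true), Backend::Tiered);
    assert_eq!(choose_backend(true, false), Backend::SyncDisk);
    assert_eq!(choose_backend(false, false), Backend::AsyncDisk);
}
```

An encryption_key of Some would then wrap whichever backend was chosen, per the parameter docs above.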
pub fn subscribe(&self) -> Receiver<String>
Create a new broadcast receiver for real-time change notifications. Each call returns an independent receiver — multiple WebSocket handlers can each subscribe and receive all events independently.
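std has no broadcast channel, but the fan-out semantics (every subscriber sees every event) can be sketched with one mpsc sender per subscriber. `Broadcaster` is an illustrative stand-in, not the crate's implementation, which the field docs say uses a broadcast channel.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};

// Fan-out sketch: each subscribe() adds a sender; send() delivers to all.
#[derive(Clone, Default)]
struct Broadcaster {
    senders: Arc<Mutex<Vec<mpsc::Sender<String>>>>,
}

impl Broadcaster {
    fn subscribe(&self) -> mpsc::Receiver<String> {
        let (tx, rx) = mpsc::channel();
        self.senders.lock().unwrap().push(tx);
        rx
    }

    fn send(&self, event: &str) {
        for tx in self.senders.lock().unwrap().iter() {
            let _ = tx.send(event.to_string()); // ignore dropped receivers
        }
    }
}

fn main() {
    let b = Broadcaster::default();
    let rx1 = b.subscribe();
    let rx2 = b.subscribe();
    b.send(r#"{"op":"insert","collection":"users","key":"u1"}"#);
    // Both independent receivers observe the same event.
    assert_eq!(rx1.recv().unwrap(), rx2.recv().unwrap());
}
```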
pub fn get(&self, collection: &str, key: &str) -> Option<Value>
Retrieve a single document by key. Returns None if not found.
pub fn get_all(&self, collection: &str) -> HashMap<String, Value>
Retrieve all documents in a collection as a HashMap.
pub fn get_batch(
    &self,
    collection: &str,
    keys: Vec<String>,
) -> HashMap<String, Value>
Retrieve a specific set of documents by their keys.
pub fn insert_batch(
    &self,
    collection: &str,
    items: Vec<(String, Value)>,
) -> Result<(), DbError>
Insert or overwrite multiple documents in one call. Each item is a (key, value) pair. Writes are persisted to storage.
pub fn update(
    &self,
    collection: &str,
    key: &str,
    updates: Value,
) -> Result<bool, DbError>
Partially update a document — merges updates into the existing document.
Returns true if the document was found and updated, false if not found.
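Assuming the merge is a shallow, top-level field merge (the docs do not say whether nested objects are merged recursively), the semantics look roughly like the following sketch, with plain string maps standing in for JSON documents:

```rust
use std::collections::HashMap;

// Shallow-merge sketch of update(): fields in `updates` overwrite or extend
// the stored document; untouched fields are preserved. (Assumed semantics.)
fn merge_update(doc: &mut HashMap<String, String>, updates: HashMap<String, String>) {
    for (field, value) in updates {
        doc.insert(field, value);
    }
}

fn main() {
    let mut doc = HashMap::from([
        ("name".to_string(), "Ada".to_string()),
        ("role".to_string(), "user".to_string()),
    ]);
    merge_update(&mut doc, HashMap::from([("role".to_string(), "admin".to_string())]));
    assert_eq!(doc["role"], "admin"); // updated field overwritten
    assert_eq!(doc["name"], "Ada");   // untouched field survives the merge
}
```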
pub fn delete(&self, collection: &str, key: &str) -> Result<(), DbError>
Delete a single document by key.
pub fn delete_batch(
    &self,
    collection: &str,
    keys: Vec<String>,
) -> Result<(), DbError>
Delete multiple documents by key in one call.
pub fn delete_collection(&self, collection: &str) -> Result<(), DbError>
Drop an entire collection — removes all documents and its indexes.
pub fn track_query(&self, collection: &str, field: &str)
Record that the given field was queried in the given collection, and auto-create
an index once the field has been queried 3 or more times.
Errors are silently ignored — auto-indexing is best-effort.
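The bookkeeping described above can be sketched as follows. The threshold of 3 comes from the docs; the function and data shapes here are simplified stand-ins for the concurrent query_heatmap and index store.

```rust
use std::collections::{HashMap, HashSet};

const AUTO_INDEX_THRESHOLD: u32 = 3; // per the docs: 3 queries triggers an index

// Sketch of track_query(): bump the "collection:field" counter and create an
// index once the threshold is reached. (Simplified, single-threaded stand-in.)
fn track_query(
    heatmap: &mut HashMap<String, u32>,
    indexes: &mut HashSet<String>,
    collection: &str,
    field: &str,
) {
    let key = format!("{collection}:{field}");
    let count = heatmap.entry(key.clone()).or_insert(0);
    *count += 1;
    if *count >= AUTO_INDEX_THRESHOLD {
        indexes.insert(key); // idempotent: re-inserting an existing index is a no-op
    }
}

fn main() {
    let (mut heatmap, mut indexes) = (HashMap::new(), HashSet::new());
    for _ in 0..2 {
        track_query(&mut heatmap, &mut indexes, "users", "role");
    }
    assert!(!indexes.contains("users:role")); // 2 queries: not yet indexed
    track_query(&mut heatmap, &mut indexes, "users", "role");
    assert!(indexes.contains("users:role"));  // 3rd query creates the index
}
```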
pub fn compact(&self) -> Result<(), DbError>
Compact the log file — rewrite it to contain only the current state.
This removes all dead entries (superseded INSERTs, DELETE tombstones) and writes a binary snapshot for fast next startup.
The compacted log contains:
- One INSERT entry per live document (current value only).
- One INDEX entry per registered index (index data is rebuilt on replay).
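A compaction pass over such a log can be sketched as a single replay that keeps only the latest live value per key. `LogEntry` here is an assumed shape; the real entry format (and the INDEX entries and binary snapshot) is defined by the crate.

```rust
use std::collections::HashMap;

// Assumed log entry shape for the sketch.
enum LogEntry {
    Insert { key: String, value: String },
    Delete { key: String },
}

// Replay the log: later inserts supersede earlier ones, and a delete
// tombstone removes the key entirely, so neither survives compaction.
fn compact(log: &[LogEntry]) -> Vec<(String, String)> {
    let mut live: HashMap<String, String> = HashMap::new();
    for entry in log {
        match entry {
            LogEntry::Insert { key, value } => {
                live.insert(key.clone(), value.clone());
            }
            LogEntry::Delete { key } => {
                live.remove(key);
            }
        }
    }
    let mut out: Vec<_> = live.into_iter().collect();
    out.sort(); // deterministic output order for the example
    out
}

fn main() {
    let log = vec![
        LogEntry::Insert { key: "u1".into(), value: "v1".into() },
        LogEntry::Insert { key: "u1".into(), value: "v2".into() }, // supersedes v1
        LogEntry::Insert { key: "u2".into(), value: "x".into() },
        LogEntry::Delete { key: "u2".into() },                     // tombstone
    ];
    // Only u1's current value survives.
    assert_eq!(compact(&log), vec![("u1".to_string(), "v2".to_string())]);
}
```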
pub fn evict_collection(
    &self,
    collection: &str,
    limit: usize,
) -> Result<usize, DbError>
Evict documents from RAM to disk for a collection if it exceeds the threshold.
This converts Hot(Value) entries into Cold(RecordPointer) entries.
In this v1, it re-scans the log to find the exact byte offsets for the documents.
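The Hot-to-Cold state change can be sketched with a small enum. The offset/length shape of `RecordPointer` is a guess for illustration; the real pointer comes from re-scanning the log as described above.

```rust
// Sketch of the eviction state change: a Hot entry holding the full value
// becomes a Cold entry holding only a pointer into the on-disk log.
// (The offset + length shape of the pointer is assumed.)
#[derive(Debug, PartialEq)]
enum Entry {
    Hot(String),
    Cold { offset: u64, len: u32 },
}

// Returns true if the entry was actually evicted.
fn evict(entry: &mut Entry, offset: u64, len: u32) -> bool {
    match entry {
        Entry::Hot(_) => {
            *entry = Entry::Cold { offset, len };
            true
        }
        Entry::Cold { .. } => false, // already evicted; nothing to do
    }
}

fn main() {
    let mut e = Entry::Hot(r#"{"name":"Ada"}"#.to_string());
    assert!(evict(&mut e, 1024, 14));
    assert_eq!(e, Entry::Cold { offset: 1024, len: 14 });
    assert!(!evict(&mut e, 1024, 14)); // second eviction is a no-op
}
```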