
Trait StorageBackend 

pub trait StorageBackend: Send + Sync {
    // Required methods
    fn write_entry(&self, entry: &LogEntry) -> Result<(), DbError>;
    fn read_log(&self) -> Result<Vec<LogEntry>, DbError>;
    fn compact(&self, entries: Vec<LogEntry>) -> Result<(), DbError>;
    fn read_at(&self, offset: u64, length: u32) -> Result<Vec<u8>, DbError>;

    // Provided methods
    fn get_size(&self) -> Result<u64, DbError> { ... }
    fn stream_log_into(
        &self,
        f: &mut dyn FnMut(LogEntry, u32),
    ) -> Result<u64, DbError> { ... }
}

The core storage abstraction. Implement this trait to add a new storage backend.

All four required methods operate on LogEntry, the atomic unit of data in MoltenDB. The engine never writes raw bytes; it always goes through this interface.
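As a concrete illustration, here is a minimal in-memory implementation of the trait. This is a hedged sketch: the LogEntry and DbError definitions below are simplified stand-ins for MoltenDB's real types, and the meaning of the u32 passed to stream_log_into's callback (treated here as an entry index) is an assumption.

```rust
use std::sync::Mutex;

// Simplified stand-ins for MoltenDB's real LogEntry and DbError types.
#[derive(Clone, Debug, PartialEq)]
pub struct LogEntry {
    pub payload: Vec<u8>,
}

#[derive(Debug)]
pub struct DbError(pub String);

pub trait StorageBackend: Send + Sync {
    fn write_entry(&self, entry: &LogEntry) -> Result<(), DbError>;
    fn read_log(&self) -> Result<Vec<LogEntry>, DbError>;
    fn compact(&self, entries: Vec<LogEntry>) -> Result<(), DbError>;
    fn read_at(&self, offset: u64, length: u32) -> Result<Vec<u8>, DbError>;

    fn get_size(&self) -> Result<u64, DbError> {
        Ok(0) // default: no size information available
    }

    fn stream_log_into(&self, f: &mut dyn FnMut(LogEntry, u32)) -> Result<u64, DbError> {
        // Default falls back to read_log(), replaying each entry in order.
        // Passing the index as the u32 argument is an assumption here.
        let entries = self.read_log()?;
        let mut count = 0u64;
        for (i, e) in entries.into_iter().enumerate() {
            f(e, i as u32);
            count += 1;
        }
        Ok(count)
    }
}

/// A toy in-memory backend: the "log" is a Vec guarded by a Mutex.
#[derive(Default)]
pub struct MemStorage {
    log: Mutex<Vec<LogEntry>>,
}

impl StorageBackend for MemStorage {
    fn write_entry(&self, entry: &LogEntry) -> Result<(), DbError> {
        self.log.lock().unwrap().push(entry.clone());
        Ok(())
    }

    fn read_log(&self) -> Result<Vec<LogEntry>, DbError> {
        Ok(self.log.lock().unwrap().clone())
    }

    fn compact(&self, entries: Vec<LogEntry>) -> Result<(), DbError> {
        // Atomically replace the old log with the minimal live set.
        *self.log.lock().unwrap() = entries;
        Ok(())
    }

    fn read_at(&self, offset: u64, length: u32) -> Result<Vec<u8>, DbError> {
        // Treat the concatenated payloads as one flat byte stream.
        let bytes: Vec<u8> = self
            .log
            .lock()
            .unwrap()
            .iter()
            .flat_map(|e| e.payload.iter().copied())
            .collect();
        let start = offset as usize;
        let end = start + length as usize;
        bytes
            .get(start..end)
            .map(|s| s.to_vec())
            .ok_or_else(|| DbError("read_at out of bounds".into()))
    }
}
```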

Required Methods


fn write_entry(&self, entry: &LogEntry) -> Result<(), DbError>

Append a single log entry to the persistent store.

This is called on every insert, update, delete, and index creation. Implementations may buffer writes (async) or flush immediately (sync).


fn read_log(&self) -> Result<Vec<LogEntry>, DbError>

Read all log entries from persistent storage into a Vec.

Called on startup to rebuild the in-memory state, and by EncryptedStorage, which must decrypt entries before they can be streamed into state. For large databases, prefer stream_log_into, which avoids holding the full log in RAM.


fn compact(&self, entries: Vec<LogEntry>) -> Result<(), DbError>

Compact the log by writing only the current state (removing dead entries).

entries is the complete current state of the database — every live document as a single INSERT entry. The implementation should atomically replace the existing log with this minimal set.
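To make the "minimal set" idea concrete, the following sketch folds a raw log down to one live entry per document. The Entry and Op shapes and the last-write-wins rule are illustrative assumptions, not MoltenDB's actual entry format.

```rust
use std::collections::BTreeMap;

// Hypothetical entry shape for illustration: each log line is an
// Insert (carrying the document body) or a Delete, keyed by document id.
#[derive(Clone, Debug, PartialEq)]
enum Op {
    Insert(String),
    Delete,
}

#[derive(Clone, Debug, PartialEq)]
struct Entry {
    id: u64,
    op: Op,
}

/// Reduce a full log to the minimal live set: the last write wins per id,
/// and deleted documents are dropped entirely.
fn live_set(log: &[Entry]) -> Vec<Entry> {
    let mut latest: BTreeMap<u64, &Entry> = BTreeMap::new();
    for e in log {
        latest.insert(e.id, e);
    }
    latest
        .values()
        .filter(|e| !matches!(e.op, Op::Delete))
        .map(|e| (*e).clone())
        .collect()
}
```

The result of such a fold is what a caller would hand to compact: every surviving document as a single INSERT entry.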


fn read_at(&self, offset: u64, length: u32) -> Result<Vec<u8>, DbError>

Read exactly length bytes starting at offset from the log.

This is used to fetch “Cold” documents from the append-only log without loading the entire file into memory.
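For a file-backed implementation, this maps naturally onto seek-then-read-exact. A hedged sketch using std::fs follows; the plain-file layout is an assumption, and real backends may frame entries differently.

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// Read exactly `length` bytes at `offset` from a log file on disk.
// Fails (rather than truncating) if fewer than `length` bytes remain.
fn read_at(path: &str, offset: u64, length: u32) -> std::io::Result<Vec<u8>> {
    let mut file = File::open(path)?;
    file.seek(SeekFrom::Start(offset))?;
    let mut buf = vec![0u8; length as usize];
    file.read_exact(&mut buf)?;
    Ok(buf)
}
```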

Provided Methods


fn get_size(&self) -> Result<u64, DbError>

Return the current size of the persistent log file in bytes.

Used by the WASM worker to implement size-based auto-compaction — the JS side calls get_size after every INSERT batch and compacts if the file exceeds the configured threshold (default: 5 MB).

The default implementation returns 0 (no size information available). OpfsStorage overrides this with a real FileSystemSyncAccessHandle.getSize() call. Native disk backends don’t need this — they use OS-level file metadata instead.
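The policy described above amounts to a simple threshold check, sketched here as plain Rust for clarity. The function and its signature are hypothetical glue; only the 5 MB default comes from the documentation.

```rust
// Default auto-compaction threshold per the docs: 5 MB.
const DEFAULT_THRESHOLD_BYTES: u64 = 5 * 1024 * 1024;

// Decide whether the log has grown past the (optionally overridden)
// threshold; the caller would then trigger compact().
fn needs_compaction(log_size_bytes: u64, threshold: Option<u64>) -> bool {
    log_size_bytes > threshold.unwrap_or(DEFAULT_THRESHOLD_BYTES)
}
```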


fn stream_log_into(&self, f: &mut dyn FnMut(LogEntry, u32)) -> Result<u64, DbError>

Stream log entries into state one at a time, without loading the full log into RAM. Implementations may load a binary snapshot first and only replay the delta lines written after the snapshot.

The default implementation falls back to read_log() for backward compatibility (used by WASM and EncryptedStorage, which don't have snapshots).

Returns the total number of entries processed.

Implementors


impl StorageBackend for EncryptedStorage

Implement the StorageBackend trait so EncryptedStorage can be used anywhere a StorageBackend is expected — the rest of the engine doesn’t know or care that encryption is happening.
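The wrapper pattern this describes can be sketched with a deliberately simplified store trait. The XOR "cipher" below is a placeholder to show the shape of the decorator, not MoltenDB's actual encryption, and the trait here is a cut-down stand-in for StorageBackend.

```rust
// Cut-down stand-in for StorageBackend, enough to show the decorator shape.
trait ByteStore: Send + Sync {
    fn write(&mut self, bytes: Vec<u8>);
    fn read(&self) -> Vec<Vec<u8>>;
}

// A trivial in-memory store.
struct Mem(Vec<Vec<u8>>);

impl ByteStore for Mem {
    fn write(&mut self, bytes: Vec<u8>) {
        self.0.push(bytes);
    }
    fn read(&self) -> Vec<Vec<u8>> {
        self.0.clone()
    }
}

/// Wraps any inner store: callers see plaintext, the inner store only
/// ever sees ciphertext. The single-byte XOR key is purely illustrative.
struct Encrypted<S: ByteStore> {
    inner: S,
    key: u8,
}

impl<S: ByteStore> ByteStore for Encrypted<S> {
    fn write(&mut self, bytes: Vec<u8>) {
        let ct = bytes.iter().map(|b| b ^ self.key).collect();
        self.inner.write(ct);
    }
    fn read(&self) -> Vec<Vec<u8>> {
        self.inner
            .read()
            .into_iter()
            .map(|e| e.iter().map(|b| b ^ self.key).collect())
            .collect()
    }
}
```

Because Encrypted implements the same trait as its inner store, callers need no knowledge of the encryption layer, which mirrors how the engine treats EncryptedStorage as just another StorageBackend.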