pub struct Database { /* private fields */ }

Implementations§

impl Database
pub fn create(path: impl AsRef<Path>) -> Result<Database, DatabaseError>
Opens the specified file as a redb database.
- if the file does not exist, or is an empty file, a new database will be initialized in it
- if the file is a valid redb database, it will be opened
- otherwise this function will return an error
pub fn open(path: impl AsRef<Path>) -> Result<Database, DatabaseError>
Opens an existing redb database.
pub fn backup(&self, path: impl AsRef<Path>) -> Result<(), StorageError>
Creates a consistent backup of the database at the given path.
The backup captures a snapshot of the last committed transaction. This method can be called while other read or write transactions are active – it will not block writers and will not include uncommitted data.
The resulting file is a valid redb database. Open it with Database::open
(recommended, handles any needed repair) or Builder::open.
pub fn verify_backup(
    path: impl AsRef<Path>,
    level: VerifyLevel,
) -> Result<VerifyReport, DatabaseError>
Verifies the integrity of a backup (or any redb database file) without modifying it.
This is a standalone function that does not require an open Database.
The file is opened read-only and is never modified, making it safe to run on
backup files, read-only media, or files in use by another process.
§Verification levels
- VerifyLevel::Header: Verifies magic number and commit slot checksums (~instant)
- VerifyLevel::Pages: Header + walks all B-tree pages verifying XXH3-128 checksums
- VerifyLevel::Full: Pages + verifies B-tree structural invariants (key ordering, valid child pointers, consistent tree depth)
pub fn salvage(
    corrupted_path: impl AsRef<Path>,
    output_path: impl AsRef<Path>,
) -> Result<SalvageReport, DatabaseError>
Best-effort recovery of data from a corrupted database file.
Opens corrupted_path read-only, walks every discoverable table’s B-tree
(skipping corrupted subtrees), and writes all recoverable key/value pairs
into a fresh database at output_path.
Recovered tables use raw &[u8] key/value types. If the original table
used typed keys (e.g. &str, u64), the caller must re-interpret the raw
bytes after recovery.
Returns a SalvageReport summarising what was recovered and what was lost.
pub fn verify_integrity(&self, level: VerifyLevel) -> Result<VerifyReport>
Verifies the integrity of an open database without modifying it.
Unlike check_integrity which repairs the database and
commits changes, this method is purely read-only and returns a detailed report.
It can be called while read or write transactions are active.
§Verification levels
- VerifyLevel::Header: Verifies commit slot checksums (~instant)
- VerifyLevel::Pages: Header + walks all B-tree pages verifying XXH3-128 checksums
- VerifyLevel::Full: Pages + verifies B-tree structural invariants
pub fn check_integrity(&mut self) -> Result<bool, DatabaseError>
Force a check of the integrity of the database file, and repair it if possible.
Note: Calling this function is unnecessary during normal operation. redb will automatically detect and recover from crashes, power loss, and other unclean shutdowns. This function is quite slow and should only be used when you suspect the database file may have been modified externally to redb, or that a redb bug may have left the database in a corrupted state.
Returns Ok(true) if the database passed integrity checks, Ok(false) if it failed but was repaired, and Err(Corrupted) if the check failed and the file could not be repaired.
pub fn compact(&mut self) -> Result<bool, CompactionError>
Compacts the database file.
Returns true if compaction was performed, and false if no further compaction was possible.
pub fn compact_blobs(&mut self) -> Result<BlobCompactionReport, CompactionError>
Compacts the blob region, removing dead space left by deleted blobs.
Uses a two-pass crash-safe algorithm:
- Pass 1: appends live blobs after the current region end, updates all offsets, commits.
- Pass 2: shifts live data to the start of the region, updates offsets, commits.
- Finally, truncates the file.
Same constraints as compact(): no active read transactions
or persistent/ephemeral savepoints.
pub fn should_compact_blobs(&self) -> Result<Option<BlobStats>, TransactionError>
Checks blob region statistics against the configured
BlobCompactionPolicy and returns Some(stats) if compaction is
recommended, or None if the region is healthy.
This is purely advisory – the database never auto-compacts.
pub fn compact_blobs_with_progress(
    &mut self,
    callback: impl FnMut(u64, u64, u64, u64) -> bool,
) -> Result<BlobCompactionReport, CompactionError>
Like compact_blobs(), but invokes callback
after each pass with (blobs_processed, total_blobs, bytes_processed, total_bytes). Return false from the callback to cancel compaction.
Same constraints as compact_blobs: no active read transactions or
persistent/ephemeral savepoints.
pub fn start_compaction(&self) -> Result<CompactionHandle<'_>, CompactionError>
Starts an incremental online compaction that allows concurrent readers.
Unlike compact() which requires &mut self and blocks
all readers, this method takes &self and performs compaction in small steps.
Each step briefly acquires a write lock, relocates a batch of pages, and releases
it – allowing read transactions to proceed between steps.
Persistent and ephemeral savepoints are not allowed during compaction because they pin old page versions indefinitely.
§Example
let db = Database::create("my.db").unwrap();
let handle = db.start_compaction().unwrap();
let total = handle.run().unwrap();
println!("Relocated {} pages", total);
pub fn start_blob_compaction(&self) -> Result<BlobCompactionHandle<'_>, CompactionError>
Starts online blob compaction that allows concurrent readers between phases.
Unlike compact_blobs(), this takes &self
and splits the work into two phases. Between phases, read transactions
can proceed normally.
No active read transactions may exist when the handle is created. Persistent and ephemeral savepoints are not allowed.
pub fn start_integrity_scanner(
    &self,
    config: IntegrityScannerConfig,
) -> Result<IntegrityScannerHandle, DatabaseError>
Starts a background integrity scanner that periodically walks all B-tree pages and checks XXH3-128 checksums.
The scanner runs on a dedicated thread and never blocks normal read/write traffic. Results are available via the returned handle.
The thread is automatically stopped when the handle is dropped.
pub fn export_incremental(
    &self,
    since_txn: u64,
) -> Result<IncrementalSnapshot, StorageError>
Export a logical incremental snapshot of all key/value changes since
since_txn.
Requires Builder::set_history_retention() > 0 and since_txn must
still be within the retention window.
pub fn import_incremental(
    &self,
    snapshot: &IncrementalSnapshot,
) -> Result<IncrementalImportReport, StorageError>
Apply an incremental snapshot to this database.
Performs a logical replay: upserted entries are inserted, deleted entries are removed, and dropped tables are deleted. Executes as a single atomic write transaction.
pub fn backup_incremental(
    &self,
    dest: impl AsRef<Path>,
    since_txn: u64,
) -> Result<IncrementalBackupReport, StorageError>
Write an incremental delta file containing only changes since since_txn.
The destination file is a portable delta (not a valid database file)
that can be applied with apply_incremental_backup().
pub fn apply_incremental_backup(
    &self,
    path: impl AsRef<Path>,
) -> Result<IncrementalImportReport, StorageError>
Apply an incremental delta file produced by backup_incremental().
Reads the file, verifies its SHA-256 integrity, and performs a logical
import identical to import_incremental().
pub fn builder() -> Builder
Convenience method for Builder::new
pub fn begin_write(&self) -> Result<WriteTransaction, TransactionError>
Begins a write transaction
Returns a WriteTransaction which may be used to read/write to the database. Only a single
write may be in progress at a time. If a write is in progress, this function will block
until it completes.
pub fn observer(&self) -> &Arc<dyn DatabaseObserver>
Returns the observer registered with this database.
pub fn begin_read_at(
    &self,
    transaction_id: u64,
) -> Result<ReadTransaction, TransactionError>
Begin a read transaction at a specific historical transaction ID.
The database must have been opened with set_history_retention() > 0 and the
requested transaction must still be within the retention window.
Returns a ReadTransaction that sees the user-table state as of the
requested commit.
pub fn begin_read_at_time(
    &self,
    timestamp_ms: u64,
) -> Result<ReadTransaction, TransactionError>
Begin a read transaction at the latest snapshot whose timestamp is <= the given epoch-millisecond value.
Requires the std feature (timestamps are only recorded with std).
pub fn transaction_history(&self) -> Result<Vec<TransactionInfo>, TransactionError>
List all retained transaction snapshots.
Returns entries ordered by transaction ID (ascending).
pub fn submit_write_batch(&self, batch: WriteBatch) -> Result<(), GroupCommitError>
Submit a write batch to the group commit pipeline.
Multiple concurrent callers will have their batches combined into a single write transaction with a single fsync, amortizing the durability cost across all participants.
The batch closure receives a &WriteTransaction and performs all desired
mutations (open tables, insert, remove, etc.). Do not call commit() or
abort() within the closure – the group committer manages the transaction
lifecycle.
If any batch in a group fails, the entire group is rolled back. The failed
batch receives its specific error; all others receive GroupCommitError::PeerFailed
and may retry.