pub struct BlockDB {
pub path: Arc<PathBuf>,
pub options_store: Arc<OptionsStore>,
/* private fields */
}
Fields
path: Arc<PathBuf>
options_store: Arc<OptionsStore>
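Both fields are public and cheap to share; a minimal sketch of inspecting them after opening a database, assuming the ./data directory used in the examples below:
let block_db = BlockDB::open("./data", None).await?;
// Both handles are Arc-wrapped, so cloning only bumps a reference count.
let path = Arc::clone(&block_db.path);
let options_store = Arc::clone(&block_db.options_store);
println!("BlockDB rooted at {}", path.display());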
Implementations
impl BlockDB
pub async fn batch<Bytes: AsRef<[u8]>>(
    &mut self,
    writes: Vec<Bytes>,
    frees: Vec<&BlockKey>,
) -> Result<BatchResult, Error>
Performs a batch of writes followed by a batch of frees, combined into a single atomic operation.
Returns:
- A Vec<BlockKey> containing the keys for the newly written DataBlocks, in the same order they were provided.
- The total number of bytes freed by the batch delete.
- Atomic
- Corruptible
Example
let mut block_db = BlockDB::open("./data", None).await?;
let block_key = block_db.write(b"Shark").await?;
let BatchResult {
freed_bytes,
new_block_keys,
} = block_db
.batch(
vec![b"Hello", b"World"],
vec![&block_key]
).await?;
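As a follow-up sketch, the returned keys can be read back positionally with read_many (documented below), since new_block_keys preserves the order of the writes:
// { BlockKey { .. }: Some(b"Hello"), BlockKey { .. }: Some(b"World") }
let read_back = block_db.read_many(new_block_keys.iter().collect()).await?;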
impl BlockDB
pub async fn byte_size(&self) -> ByteSize
Obtain the ByteSize of the entire BlockDB.
Example
let block_db = BlockDB::open("./data", None).await?;
let block_key: BlockKey = block_db.write(b"123").await?;
let ByteSize {
used_bytes, // 4096 (Default chunk size)
free_bytes, // 0
} = block_db.byte_size().await;
pub async fn data_file_byte_size<S: AsRef<str>>(
    &self,
    data_file_id: S,
) -> Option<ByteSize>
Obtain the ByteSize of a single DataFile.
Example
let block_db = BlockDB::open("./data", None).await?;
let BlockKey { data_file_id, .. } = block_db.write(b"123").await?;
let ByteSize {
used_bytes, // 4096 (Default chunk size)
free_bytes, // 0
} = block_db.data_file_byte_size(data_file_id).await.unwrap();
pub async fn data_file_byte_size_map(&self) -> HashMap<String, ByteSize>
Obtain a mapping of the ByteSize for each DataFile.
Example
let block_db = BlockDB::open("./data", None).await?;
let BlockKey { data_file_id, .. } = block_db.write(b"123").await?;
// {
// "z-daZa": ByteSize {
// used_bytes: 4096, (Default chunk size)
// free_bytes: 0,
// },
// }
let byte_size_map = block_db.data_file_byte_size_map().await;
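As a sketch, the map can be folded into database-wide totals; the used_bytes/free_bytes field names follow the ByteSize examples above, and the concrete integer type is left to inference:
let mut total_used = 0;
let mut total_free = 0;
for (data_file_id, byte_size) in &byte_size_map {
    println!("{data_file_id}: used {} / free {}", byte_size.used_bytes, byte_size.free_bytes);
    total_used += byte_size.used_bytes;
    total_free += byte_size.free_bytes;
}
println!("total: used {total_used} / free {total_free}");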
impl BlockDB
pub async fn uncorrupt(&mut self, action: UncorruptAction) -> Result<(), Error>
Attempts to resolve a BlockDB corruption state.
Used in response to an Error::Corrupted, this method attempts to automatically resolve the issue by performing recovery steps.
The only error this method can return is another Error::Corrupted, indicating that the corruption has not yet been resolved and recovery should be retried. This allows multiple safe attempts to uncorrupt the database.
If specific DataFiles remain deadlocked, even after ensuring filesystem and hardware stability, it likely indicates unrecoverable corruption in the WAL or binary file contents.
Example
// ...
if let Err(err) = block_db
.batch(vec![b"Hello", b"World"], vec![])
.await
{
if let Error::Corrupted { action, .. } = err {
if let Err(Error::Corrupted { action, .. }) = block_db.uncorrupt(action).await {
// Store this action to process after ensuring stability
}
}
}
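Because the only possible error is another Error::Corrupted, retrying is safe. A bounded-retry sketch, assuming block_db and an action taken from an Error::Corrupted as in the example above (the retry count is illustrative, not part of the API):
let mut pending = Some(action);
for _ in 0..3 {
    let Some(current) = pending.take() else { break };
    if let Err(Error::Corrupted { action, .. }) = block_db.uncorrupt(current).await {
        pending = Some(action);
    }
}
if pending.is_some() {
    // Still corrupted: persist the remaining action and retry after ensuring
    // filesystem and hardware stability.
}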
impl BlockDB
pub async fn clear_data_file<S: AsRef<str>>(
    &self,
    data_file_id: S,
    _: ConfirmDestructiveAction,
) -> Result<(), Error>
⚠️ CLEARS a DataFile of all of its data. This action is irreversible.
Requires explicitly passing ConfirmDestructiveAction::IKnowWhatImDoing to confirm intent.
Use with extreme caution.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.clear_data_file("3f-6hf", ConfirmDestructiveAction::IKnowWhatImDoing).await?;
pub async fn clear_data_files(
    &self,
    _: ConfirmDestructiveAction,
) -> Result<(), Error>
⚠️ CLEARS all DataFiles and all of their data. This action is irreversible.
Requires explicitly passing ConfirmDestructiveAction::IKnowWhatImDoing to confirm intent.
Use with extreme caution.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.clear_data_files(ConfirmDestructiveAction::IKnowWhatImDoing).await?;
impl BlockDB
pub async fn compact_data_file<S: AsRef<str>>(
    &self,
    data_file_id: S,
) -> Result<(), Error>
Compacts a DataFile by removing all free chunks.
A temporary file is created and all non-free DataBlocks are written into it. The temporary file is then swapped in to replace the original, effectively removing unused space and reducing the file size.
Note: This is a relatively heavy operation. It’s recommended to run it during initialization (warm-up) or before shutdown (clean-up), as freed chunks are already reused automatically during writes.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.compact_data_file("3f-6hf").await?;
pub async fn compact_data_files(&self) -> Result<(), Error>
Compacts each DataFile by removing all free chunks.
For every DataFile, a temporary file is created and all non-free DataBlocks are written into it. The temporary file is then swapped in to replace the original, effectively removing unused space and reducing the file size.
Note: This is a relatively heavy operation. It’s recommended to run it during initialization (warm-up) or before shutdown (clean-up), as freed chunks are already reused automatically during writes.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.compact_data_files().await?;
impl BlockDB
pub async fn delete(self, _: ConfirmDestructiveAction) -> Result<(), Error>
⚠️ DELETES the BlockDB and all of its data. This action is irreversible.
Requires explicitly passing ConfirmDestructiveAction::IKnowWhatImDoing to confirm intent.
Use with extreme caution. This will remove all DataFiles, metadata, WALs, and configuration associated with the database.
- Atomic
- Non-corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.delete(ConfirmDestructiveAction::IKnowWhatImDoing).await?;
impl BlockDB
pub async fn delete_data_file<S: AsRef<str>>(
    &self,
    data_file_id: S,
    _: ConfirmDestructiveAction,
) -> Result<(), Error>
⚠️ DELETES a DataFile and all of its data. This action is irreversible.
Requires explicitly passing ConfirmDestructiveAction::IKnowWhatImDoing to confirm intent.
Use with extreme caution.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.delete_data_file("3f-6hf", ConfirmDestructiveAction::IKnowWhatImDoing).await?;
pub async fn delete_data_files(
    &self,
    _: ConfirmDestructiveAction,
) -> Result<(), Error>
⚠️ DELETES all DataFiles and all of their data. This action is irreversible.
Requires explicitly passing ConfirmDestructiveAction::IKnowWhatImDoing to confirm intent.
Use with extreme caution.
- Atomic
- Corruptible
Example
let block_db = BlockDB::open("./data", None).await?;
block_db.delete_data_files(ConfirmDestructiveAction::IKnowWhatImDoing).await?;
impl BlockDB
pub async fn free(&mut self, __arg1: &BlockKey) -> Result<usize, Error>
Frees the DataBlock (data_block_id) within DataFile (data_file_id) by marking its chunks as “free”.
Once marked as free, the DataBlock is unrecoverable, making this operation functionally equivalent to a delete.
Freed space will be reused in future writes. To physically remove the freed bytes from the underlying DataFile, use BlockDB::compact_data_file(s).
Returns the total number of bytes freed.
- Atomic
- Non-corruptible
Example
let mut block_db = BlockDB::open("./data", None).await?;
let block_key = block_db.write(b"Hello World").await?;
let freed_bytes: usize = block_db.free(&block_key).await?;
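To then physically reclaim that space on disk, a sketch combining free with the compaction and sizing methods documented on this page, assuming the BlockKey fields shown in the write docs are directly accessible:
let mut block_db = BlockDB::open("./data", None).await?;
let block_key = block_db.write(b"Hello World").await?;
let freed_bytes = block_db.free(&block_key).await?;
// Freed chunks are reused by later writes; compaction removes them from the file.
block_db.compact_data_file(&block_key.data_file_id).await?;
// The database-wide free_bytes should now reflect the compaction.
let ByteSize { used_bytes, free_bytes } = block_db.byte_size().await;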
impl BlockDB
pub async fn read(&self, __arg1: &BlockKey) -> Result<Option<Vec<u8>>, Error>
Reads the bytes of a DataBlock from a DataFile.
Returns None if either the data_file_id or data_block_id does not exist.
- Atomic
- Non-corruptible
Example
// BlockKey { data_file_id, data_block_id }
let block_key = block_db.write(b"Hello World").await?;
// b"Hello World".to_vec()
let data = block_db.read(&block_key).await?.unwrap();
pub async fn read_many(
    &self,
    keys: Vec<&BlockKey>,
) -> Result<HashMap<BlockKey, Option<Vec<u8>>>, Error>
Reads the bytes of multiple DataBlocks in a single, consistent view.
- Atomic
- Non-corruptible
Example
let block_key_one = block_db.write(b"Hello").await?;
let block_key_two = block_db.write(b"World").await?;
// {
// BlockKey { .. } : Some(b"Hello"),
// BlockKey { .. } : Some(b"World"),
// }
let read_many_map = block_db.read_many(vec![&block_key_one, &block_key_two]).await?;
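Since each value is an Option, a short sketch of collecting the keys whose blocks are missing or were already freed:
let missing_keys: Vec<&BlockKey> = read_many_map
    .iter()
    .filter(|(_, bytes)| bytes.is_none())
    .map(|(key, _)| key)
    .collect();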
impl BlockDB
pub async fn recover(&mut self) -> Result<(), Error>
Recovers the state of the BlockDB by replaying its WAL, as well as the WALs for all DataFiles. Invalid DataFiles and DataBlocks are deleted during the process.
This method complements BlockDB::uncorrupt, and is intended for cases where the file structure has become corrupted through external means, not via BlockDB methods themselves (e.g., hardware issues or filesystem corruption).
As noted in uncorrupt, you should ensure filesystem and hardware stability before invoking this method, as such corruption usually indicates a deeper issue with the environment.
If this method fails repeatedly, it likely means the WAL(s) themselves are irreparably corrupted, in which case manual deletion of the BlockDB may be required.
- Atomic
- Non-corruptible
Example
block_db.recover().await?;
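For instance, a startup sketch that replays the WALs before accepting writes and then checks the resulting sizes; the ./data path is illustrative:
let mut block_db = BlockDB::open("./data", None).await?;
block_db.recover().await?;
let ByteSize { used_bytes, free_bytes } = block_db.byte_size().await;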
impl BlockDB
pub async fn recover_data_file<S: AsRef<str>>(
    &self,
    data_file_id: S,
) -> Result<(), Error>
Recovers a specific DataFile to a known good state.
During recovery, the DataFile will replay its WAL, truncate the binary file to the correct length, and update in-memory state accordingly.
- Atomic
- Non-corruptible
Example
block_db.recover_data_file("3f-6hf").await?;
pub async fn recover_data_files(&self) -> Result<(), Error>
Recovers all DataFiles to a known good state.
During recovery, each DataFile will replay its WAL and truncate the binary file to the correct length. This process restores the in-memory state to match the persistent state on disk.
- Atomic
- Non-corruptible
Example
block_db.recover_data_files().await?;
impl BlockDB
pub async fn write<B: AsRef<[u8]>>(&self, bytes: B) -> Result<BlockKey, Error>
Write bytes into a DataFile, creating a new DataBlock.
Returns BlockKey { data_file_id, data_block_id }.
See Write Distribution in the README for more information.
- Atomic
- Non-corruptible
Example
let BlockKey { data_file_id, data_block_id } = block_db.write(b"Hello World").await?;