pub struct Marble { /* private fields */ }
Garbage-collecting object store. A nice solution to back a pagecache, for people building their own databases.

ROWEX-style concurrency: readers rarely block on other readers or writers, but writes are serialized to be friendlier for SSD GC. This means that writes should generally be performed by some background process whose job it is to clean logs etc…
Implementations
impl Marble
pub fn open<P: AsRef<Path>>(path: P) -> Result<Marble>
pub fn open_with_config(config: Config) -> Result<Marble>
pub fn read(&self, pid: ObjectId) -> Result<Option<Vec<u8>>>
Read an object out of storage. If this object is unknown or has been removed, returns Ok(None). May be called concurrently with background calls to maintenance and write_batch.
pub fn file_statistics(&self) -> FileStats
Statistics about current files, intended to inform decisions about when to call maintenance based on desired write and space amplification characteristics.
pub fn stable_logical_sequence_number(&self) -> u64
A monotonic measure of logical progress that this system has made. You can refer to this in logs and other stores that feed into marble, so that after recovering marble, you can avoid double-recovering any mutations that were already persisted here. Note that if you are concurrently calling write_batch or maintenance, the stable logical sequence number will increase, so you should only use this to fence idempotent operations if used in a concurrent setting.
pub fn write_batch<I>(&self, write_batch: I) -> Result<()> where
    I: IntoIterator<Item = (ObjectId, Option<Vec<u8>>)>,
Write a batch of objects to disk. This function is crash-atomic but NOT runtime-atomic. If you are concurrently serving reads and require atomic batch semantics, you should serve reads out of an in-memory cache until this function returns.

Creates at least one file and performs several fsync calls per invocation. Ideally, you will heavily batch objects being written using a logger of some sort before calling this function occasionally in the background, then delete the corresponding logs after this function returns.
pub fn maintenance(&self) -> Result<usize>
Defragments backing storage files, blocking concurrent calls to write_batch but not concurrent calls to read. Returns the number of rewritten objects.
Auto Trait Implementations
impl RefUnwindSafe for Marble
impl Send for Marble
impl Sync for Marble
impl Unpin for Marble
impl UnwindSafe for Marble
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.