Struct Checkpoint

pub struct Checkpoint<'db> { /* private fields */ }

Database's checkpoint object. Used to create checkpoints of the specified DB from time to time.

Implementations

impl<'db> Checkpoint<'db>

pub fn new<T: ThreadMode, I: DBInner>(db: &'db DBCommon<T, I>) -> Result<Self, Error>

Creates a new checkpoint object for the specified DB.

This does not actually produce a checkpoint; call the create_checkpoint() method to produce a physical DB checkpoint.
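Examples

A minimal sketch of constructing the checkpoint object; the database path used here is illustrative.

use rust_rocksdb::{DB, checkpoint::Checkpoint};

fn open_and_prepare_checkpoint() {
    // Open the DB with default options; the path is a placeholder.
    let db = DB::open_default("/tmp/example_db").unwrap();
    // Borrows the DB; nothing is written to disk until create_checkpoint()
    // (or a related method) is called on the returned object.
    let _checkpoint = Checkpoint::new(&db).unwrap();
}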

pub fn create_checkpoint<P: AsRef<Path>>(&self, path: P) -> Result<(), Error>

Creates a new physical RocksDB checkpoint in the directory specified by path.

A checkpoint is a consistent, read-only view of the database at a specific point in time. Internally, RocksDB creates a new MANIFEST and metadata files and hard-links the relevant SST files, making the checkpoint efficient to create and safe to keep for long-lived reads.

This method uses the default log_size_for_flush value (0), which instructs RocksDB to flush memtables as needed before creating the checkpoint. Forcing a flush ensures that the checkpoint includes the most recent writes that may still reside in memtables at the time of checkpoint creation.

Forcing a flush may create new SST file(s), including very small L0 SSTs if little data has been written since the last flush. Applications that create checkpoints frequently or during periods of low write volume may wish to control this behavior with create_checkpoint_with_log_size, which allows specifying log_size_for_flush.

Note:

  • Checkpoints are always SST-based and never depend on WAL files or live memtables when opened.
  • If writes are performed with WAL disabled, forcing a flush is required to ensure those writes appear in the checkpoint.
  • When using RocksDB TransactionDB with two-phase commit (2PC), RocksDB will always flush regardless of the log_size_for_flush setting.
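Examples

A minimal sketch of the typical flow, assuming the target checkpoint directory does not already exist; the function and path names are illustrative.

use rust_rocksdb::{DB, checkpoint::Checkpoint};

fn backup_database(db: &DB, checkpoint_dir: &str) {
    let checkpoint = Checkpoint::new(db).unwrap();
    // Flushes memtables as needed (log_size_for_flush = 0) and then
    // populates checkpoint_dir with a new MANIFEST, metadata files, and
    // hard links (or copies) of the live SST files.
    checkpoint.create_checkpoint(checkpoint_dir).unwrap();
}

The resulting directory can later be opened as an ordinary database (for example with DB::open_default) to read the snapshotted state.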
pub fn create_checkpoint_with_log_size<P: AsRef<Path>>(&self, path: P, log_size_for_flush: u64) -> Result<(), Error>

Creates a new physical DB checkpoint in path, allowing the caller to control log_size_for_flush.

log_size_for_flush is forwarded to RocksDB’s Checkpoint API:

  • 0 forces a flush as needed before checkpoint creation, which helps the checkpoint include the latest writes; this may create new SST file(s).
  • A non-zero value:
    • Expected behavior (once RocksDB bug is fixed): Only forces a flush if the total WAL size exceeds the specified threshold. When a flush is not forced and WAL writing is enabled, RocksDB includes WAL files in the checkpoint that are replayed on open to reconstruct recent writes. This avoids creating small SST files during periods of low write volume, at the cost of additional checkpoint storage space for the copied WAL.
    • Current behavior (RocksDB bug): Never flushes, regardless of WAL size. The checkpoint will always include WAL files instead of flushing to SST. See: https://github.com/facebook/rocksdb/pull/14193

In practice, using a non-zero value means checkpoints may represent an older, fully materialized database state rather than the instantaneous state at the time the checkpoint is created.

Note:

  • If writes are performed with WAL disabled, using a non-zero log_size_for_flush may cause those writes to be absent from the checkpoint.
  • When using RocksDB TransactionDB with two-phase commit (2PC), RocksDB will always flush regardless of log_size_for_flush.
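Examples

A minimal sketch using a non-zero threshold; the 64 MiB value and the function name are illustrative only.

use rust_rocksdb::{DB, checkpoint::Checkpoint};

fn periodic_checkpoint(db: &DB, checkpoint_dir: &str) {
    let checkpoint = Checkpoint::new(db).unwrap();
    // With a non-zero threshold the checkpoint is intended to flush only
    // when the total WAL size exceeds it and to include WAL files
    // otherwise; see the note above about current RocksDB behavior.
    // Passing 0 instead behaves like create_checkpoint() and forces a
    // flush before the checkpoint is taken.
    checkpoint
        .create_checkpoint_with_log_size(checkpoint_dir, 64 * 1024 * 1024)
        .unwrap();
}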
pub fn export_column_family<P: AsRef<Path>>(&self, column_family: &impl AsColumnFamilyRef, path: P) -> Result<ExportImportFilesMetaData, Error>

Exports the specified column family.

Creates copies of the live SST files at the specified export path.

  • SST files are created as hard links when the specified directory is on the same partition as the DB directory, and copied otherwise.
  • The path must not yet exist; a new directory is created as part of the export.
  • Always triggers a flush.
Examples

use rust_rocksdb::{DB, checkpoint::Checkpoint};

fn export_column_family(db: &DB, column_family_name: &str, export_path: &str) {
    let cp = Checkpoint::new(&db).unwrap();
    let cf = db.cf_handle(column_family_name).unwrap();

    let export_metadata = cp.export_column_family(&cf, export_path).unwrap();

    assert!(export_metadata.get_files().len() > 0);
}

See also: DB::create_column_family_with_import.

Trait Implementations

impl Drop for Checkpoint<'_>

fn drop(&mut self)

Executes the destructor for this type.

Auto Trait Implementations

impl<'db> Freeze for Checkpoint<'db>

impl<'db> RefUnwindSafe for Checkpoint<'db>

impl<'db> !Send for Checkpoint<'db>

impl<'db> !Sync for Checkpoint<'db>

impl<'db> Unpin for Checkpoint<'db>

impl<'db> UnwindSafe for Checkpoint<'db>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.