Struct rocks::options::DBOptions

pub struct DBOptions { /* fields omitted */ }

Options for the DB

Implementations

impl DBOptions[src]

pub fn from_options(opt: &Options) -> DBOptions[src]

pub fn increase_parallelism(self, total_threads: i32) -> Self[src]

By default, RocksDB uses only one background thread for flush and compaction. Calling this function sets things up so that a total of total_threads threads is used. A good value for total_threads is the number of cores. You almost definitely want to call this function if your system is bottlenecked by RocksDB.

Default: 16
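
For example, a minimal sketch dedicating one background thread per core (the core count of 8 is a placeholder for your machine):

    use rocks::options::DBOptions;

    // One background thread per core; 8 is a placeholder value.
    let opts = DBOptions::default().increase_parallelism(8);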

pub fn create_if_missing(self, val: bool) -> Self[src]

If true, the database will be created if it is missing.

Default: false

pub fn create_missing_column_families(self, val: bool) -> Self[src]

If true, missing column families will be automatically created.

Default: false

pub fn error_if_exists(self, val: bool) -> Self[src]

If true, an error is raised if the database already exists.

Default: false
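
The three creation-related options above are commonly combined; a minimal sketch of a typical open-or-create setup:

    use rocks::options::DBOptions;

    // Create the DB and any missing column families on first open,
    // and do not error if the DB already exists.
    let opts = DBOptions::default()
        .create_if_missing(true)
        .create_missing_column_families(true)
        .error_if_exists(false);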

pub fn paranoid_checks(self, val: bool) -> Self[src]

If true, RocksDB will aggressively check consistency of the data. Also, if any of the writes to the database fails (Put, Delete, Merge, Write), the database will switch to read-only mode and fail all other Write operations.

In most cases you want this to be set to true.

Default: true

pub fn env(self, env: &'static Env) -> Self[src]

Use the specified object to interact with the environment, e.g. to read/write files, schedule background work, etc.

Default: Env::Default()

pub fn rate_limiter(self, val: Option<RateLimiter>) -> Self[src]

Used to control the write rate of flush and compaction. Flush has a higher priority than compaction. Rate limiting is disabled if nullptr. If the rate limiter is enabled, bytes_per_sync is set to 1MB by default.

Default: nullptr

pub fn sst_file_manager(self, val: Option<SstFileManager>) -> Self[src]

Used to track SST files and control their file deletion rate.

Features:

  • Throttle the deletion rate of the SST files.
  • Keep track of the total size of all SST files.
  • Set a maximum allowed space limit for SST files; when it is reached, the DB won't do any further flushes or compactions and will set the background error.
  • Can be shared between multiple DBs.

Limitations:

  • Only tracks and throttles deletes of SST files in the first db_path (db_name if db_paths is empty).

Default: nullptr

pub fn info_log(self, val: Option<Logger>) -> Self[src]

Any internal progress/error information generated by the db will be written to info_log if it is non-nullptr, or to a file stored in the same directory as the DB contents if info_log is nullptr.

Default: nullptr

pub fn info_log_level(self, val: InfoLogLevel) -> Self[src]

pub fn max_open_files(self, val: i32) -> Self[src]

Number of open files that can be used by the DB. You may need to increase this if your database has a large working set. A value of -1 means opened files are always kept open. You can estimate the number of files based on target_file_size_base and target_file_size_multiplier for level-based compaction. For universal-style compaction, you can usually set it to -1.

Default: -1

pub fn max_file_opening_threads(self, val: i32) -> Self[src]

If max_open_files is -1, DB will open all files on DB::Open(). You can use this option to increase the number of threads used to open the files.

Default: 16
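
A sketch combining this with max_open_files for a database with many files (the thread count of 32 is an illustrative value):

    use rocks::options::DBOptions;

    // Keep all files open (the default) and use 32 threads to open
    // them during DB::Open.
    let opts = DBOptions::default()
        .max_open_files(-1)
        .max_file_opening_threads(32);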

pub fn max_total_wal_size(self, val: u64) -> Self[src]

Once write-ahead logs exceed this size, we will start forcing the flush of column families whose memtables are backed by the oldest live WAL file (i.e. the ones that are causing all the space amplification). If set to 0 (default), we will dynamically choose the WAL size limit to be [sum of all write_buffer_size * max_write_buffer_number] * 4

Default: 0
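
A sketch that pins the WAL size limit to 1GB instead of relying on the dynamic default:

    use rocks::options::DBOptions;

    // Force memtable flushes once live WAL files exceed 1GB in total.
    let opts = DBOptions::default().max_total_wal_size(1 << 30);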

pub fn statistics(self, val: Option<Statistics>) -> Self[src]

If non-null, then we should collect metrics about database operations

pub fn use_fsync(self, val: bool) -> Self[src]

If true, then every store to stable storage will issue a fsync. If false, then every store to stable storage will issue a fdatasync. This parameter should be set to true while storing data to a filesystem like ext3 that can lose files after a reboot.

Default: false

Note: on many platforms fdatasync is defined as fsync, so this parameter would make no difference. Refer to fdatasync definition in this code base.

pub fn db_paths<P: Into<DbPath>, T: IntoIterator<Item = P>>(
    self,
    val: T
) -> Self
[src]

A list of paths where SST files can be put into, with its target size. Newer data is placed into paths specified earlier in the vector while older data gradually moves to paths specified later in the vector.

For example, if you have a flash device with 10GB allocated for the DB and a hard drive of 2TB, you should configure it as:

[{"/flash_path", 10GB}, {"/hard_drive", 2TB}]

The system will try to guarantee that data under each path is close to, but not larger than, the target size. However, the current and future file sizes used when determining where to place a file are based on best-effort estimation, which means there is a chance that the actual size under the directory is slightly more than the target size under some workloads. Users should leave some buffer room for those cases.

If none of the paths has sufficient room to place a file, the file will be placed in the last path anyway, regardless of the target size.

Placing newer data in earlier paths is also best-effort. Users should expect user files to be placed in higher levels in some extreme cases.

If left empty, only one path will be used, which is db_name passed when opening the DB.

Default: empty
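
A sketch of the flash-plus-hard-drive layout above, assuming a (path, target_size) tuple converts into DbPath (hypothetical — check the crate's From impls for DbPath):

    use rocks::options::DBOptions;

    // Assumes `(path, target_size)` tuples convert into `DbPath`.
    let opts = DBOptions::default().db_paths(vec![
        ("/flash_path", 10u64 << 30), // 10GB of flash for the newest data
        ("/hard_drive", 2u64 << 40),  // 2TB of disk for older data
    ]);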

pub fn db_log_dir<P: AsRef<Path>>(self, path: P) -> Self[src]

This specifies the info LOG dir.

If it is empty, the log files will be in the same dir as data.

If it is non empty, the log files will be in the specified dir, and the db data dir's absolute path will be used as the log file name's prefix.

pub fn wal_dir<P: AsRef<Path>>(self, path: P) -> Self[src]

This specifies the absolute dir path for write-ahead logs (WAL).

If it is empty, the log files will be in the same dir as the data; dbname is used as the data dir by default.

If it is non-empty, the log files will be kept in the specified dir.

When destroying the DB, all log files in wal_dir and the dir itself are deleted.
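
A sketch placing info logs and WAL files on dedicated paths (both paths are illustrative):

    use rocks::options::DBOptions;

    // Keep info logs and the WAL off the data directory.
    let opts = DBOptions::default()
        .db_log_dir("/var/log/rocksdb")
        .wal_dir("/ssd/rocksdb-wal");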

pub fn delete_obsolete_files_period_micros(self, val: u64) -> Self[src]

The periodicity with which obsolete files get deleted. The default value is 6 hours. Files that go out of scope through the compaction process will still get automatically deleted on every compaction, regardless of this setting.

pub fn max_background_jobs(self, val: i32) -> Self[src]

Maximum number of concurrent background jobs (compactions and flushes).

Default: 2

pub fn max_subcompactions(self, val: u32) -> Self[src]

This value represents the maximum number of threads that will concurrently perform a compaction job by breaking it into multiple, smaller ones that are run simultaneously.

Default: 1 (i.e. no subcompactions)

pub fn max_log_file_size(self, val: usize) -> Self[src]

Specify the maximal size of the info log file. If the log file is larger than max_log_file_size, a new info log file will be created.

If max_log_file_size == 0, all logs will be written to one log file.

pub fn log_file_time_to_roll(self, val: usize) -> Self[src]

Time for the info log file to roll (in seconds). If specified with non-zero value, log file will be rolled if it has been active longer than log_file_time_to_roll.

Default: 0 (disabled)

pub fn keep_log_file_num(self, val: usize) -> Self[src]

Maximal number of info log files to be kept.

Default: 1000

pub fn recycle_log_file_num(self, val: usize) -> Self[src]

Recycle log files.

If non-zero, we will reuse previously written log files for new logs, overwriting the old data. The value indicates how many such files we will keep around at any point in time for later use. This is more efficient because the blocks are already allocated and fdatasync does not need to update the inode after each write.

Default: 0

pub fn max_manifest_file_size(self, val: u64) -> Self[src]

The manifest file is rolled over on reaching this limit.

The older manifest file will be deleted.

The default value is MAX_INT so that roll-over does not take place.

pub fn table_cache_numshardbits(self, val: i32) -> Self[src]

Number of shards used for table cache.

pub fn wal_ttl_seconds(self, val: u64) -> Self[src]

The following two options, wal_ttl_seconds and wal_size_limit_mb, affect how archived logs will be deleted.

  1. If both are set to 0, logs will be deleted asap and will not get into the archive.
  2. If WAL_ttl_seconds is 0 and WAL_size_limit_MB is not 0, WAL files will be checked every 10 min, and if the total size is greater than WAL_size_limit_MB they will be deleted starting with the earliest until size_limit is met. All empty files will be deleted.
  3. If WAL_ttl_seconds is not 0 and WAL_size_limit_MB is 0, then WAL files will be checked every WAL_ttl_seconds / 2 and those that are older than WAL_ttl_seconds will be deleted.
  4. If both are not 0, WAL files will be checked every 10 min and both checks will be performed, with ttl being first.

pub fn wal_size_limit_mb(self, val: u64) -> Self[src]
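
This is the companion to wal_ttl_seconds above. A minimal sketch of case 2 in that list: no TTL, but archived WAL files capped at 1GB in total:

    use rocks::options::DBOptions;

    // WAL files are checked every 10 min and deleted oldest-first
    // once their total size exceeds 1GB.
    let opts = DBOptions::default()
        .wal_ttl_seconds(0)
        .wal_size_limit_mb(1024);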

pub fn manifest_preallocation_size(self, val: usize) -> Self[src]

Number of bytes to preallocate (via fallocate) for the manifest files. Default is 4MB, which is reasonable to reduce random IO as well as prevent overallocation for mounts that preallocate large amounts of data (such as xfs's allocsize option).

pub fn allow_mmap_reads(self, val: bool) -> Self[src]

Allow the OS to mmap files for reading SST tables. Default: false

pub fn allow_mmap_writes(self, val: bool) -> Self[src]

Allow the OS to mmap file for writing.

DB::SyncWAL() only works if this is set to false.

Default: false

pub fn use_direct_reads(self, val: bool) -> Self[src]

Enable direct I/O mode for reads; this may or may not improve performance depending on the use case.

Files will be opened in "direct I/O" mode, which means that data read from or written to the disk will not be cached or buffered. The hardware buffer of the devices may however still be used. Memory-mapped files are not impacted by these parameters.

Use O_DIRECT for user reads

Default: false

Not supported in ROCKSDB_LITE mode!

pub fn use_direct_io_for_flush_and_compaction(self, val: bool) -> Self[src]

Use O_DIRECT for both reads and writes in background flush and compactions. When true, we also force new_table_reader_for_compaction_inputs to true.

Default: false
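
A sketch enabling direct I/O for both user reads and background work, e.g. to avoid polluting the OS page cache during compactions:

    use rocks::options::DBOptions;

    // Bypass the OS page cache for reads and for flush/compaction I/O.
    let opts = DBOptions::default()
        .use_direct_reads(true)
        .use_direct_io_for_flush_and_compaction(true);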

pub fn allow_fallocate(self, val: bool) -> Self[src]

If false, fallocate() calls are bypassed

pub fn is_fd_close_on_exec(self, val: bool) -> Self[src]

Disable child processes from inheriting open files.

Default: true

pub fn stats_dump_period_sec(self, val: u32) -> Self[src]

If not zero, dump rocksdb.stats to LOG every stats_dump_period_sec seconds.

Default: 600 (10 min)

pub fn advise_random_on_open(self, val: bool) -> Self[src]

If set to true, RocksDB will hint the underlying file system that the file access pattern is random when an SST file is opened.

Default: true

pub fn db_write_buffer_size(self, val: usize) -> Self[src]

Amount of data to build up in memtables across all column families before writing to disk.

This is distinct from write_buffer_size, which enforces a limit for a single memtable.

This feature is disabled by default. Specify a non-zero value to enable it.

Default: 0 (disabled)
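
A sketch capping total memtable memory across all column families at 512MB:

    use rocks::options::DBOptions;

    // Trigger flushes once all memtables together exceed 512MB.
    let opts = DBOptions::default().db_write_buffer_size(512 << 20);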

pub fn write_buffer_manager(self, val: &WriteBufferManager) -> Self[src]

The memory usage of memtables will be reported to this object. The same object can be passed into multiple DBs and it will track the sum of the sizes of all the DBs. If the total size of all live memtables of all the DBs exceeds a limit, a flush will be triggered in the next DB to which the next write is issued.

If the object is only passed to one DB, the behavior is the same as db_write_buffer_size. When write_buffer_manager is set, the value set will override db_write_buffer_size.

This feature is disabled by default. Specify a non-zero value to enable it.

Default: null

pub fn access_hint_on_compaction_start(self, val: AccessHint) -> Self[src]

Specify the file access pattern once a compaction is started. It will be applied to all input files of a compaction.

Default: NORMAL

pub fn new_table_reader_for_compaction_inputs(self, val: bool) -> Self[src]

If true, always create a new file descriptor and new table reader for compaction inputs. Turning this parameter on may introduce extra memory usage in the table reader, if it allocates extra memory for indexes. This will allow file descriptor prefetch options to be set for compaction input files without impacting file descriptors for the same file used by user queries.

We suggest enabling BlockBasedTableOptions.cache_index_and_filter_blocks for this mode if using a block-based table.

Default: false

pub fn compaction_readahead_size(self, val: usize) -> Self[src]

If non-zero, we perform bigger reads when doing compaction. If you're running RocksDB on spinning disks, you should set this to at least 2MB. That way RocksDB's compaction is doing sequential instead of random reads.

When non-zero, we also force new_table_reader_for_compaction_inputs to true.

Default: 0
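
A sketch for spinning disks, using the 2MB floor suggested above:

    use rocks::options::DBOptions;

    // Read 2MB at a time during compaction so the I/O is sequential.
    let opts = DBOptions::default().compaction_readahead_size(2 << 20);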

pub fn random_access_max_buffer_size(self, val: usize) -> Self[src]

This is the maximum buffer size that is used by WinMmapReadableFile in unbuffered disk I/O mode. We need to maintain an aligned buffer for reads. We allow the buffer to grow until the specified value, and then for bigger requests we allocate one-shot buffers. In unbuffered mode we always bypass the read-ahead buffer at ReadaheadRandomAccessFile. When read-ahead is required we then make use of the compaction_readahead_size value and always try to read ahead. With read-ahead we always pre-allocate the buffer to the size instead of growing it up to a limit.

This option is currently honored only on Windows.

Default: 1 MB

Special value: 0 - means do not maintain a per-instance buffer. Allocate a per-request buffer and avoid locking.

pub fn writable_file_max_buffer_size(self, val: usize) -> Self[src]

This is the maximum buffer size that is used by WritableFileWriter. On Windows, we need to maintain an aligned buffer for writes. We allow the buffer to grow until its size hits the limit in buffered IO, and fix the buffer size when using direct IO to ensure alignment of write requests if the logical sector size is unusual.

Default: 1024 * 1024 (1 MB)

pub fn use_adaptive_mutex(self, val: bool) -> Self[src]

Use adaptive mutex, which spins in the user space before resorting to kernel. This could reduce context switch when the mutex is not heavily contended. However, if the mutex is hot, we could end up wasting spin time.

Default: false

pub fn bytes_per_sync(self, val: u64) -> Self[src]

Allows the OS to incrementally sync files to disk while they are being written, asynchronously, in the background. This operation can be used to smooth out write I/Os over time. Users shouldn't rely on it for persistence guarantees. One sync request is issued for every bytes_per_sync written. 0 turns it off. Default: 0

You may consider using rate_limiter to regulate the write rate to the device. When the rate limiter is enabled, it automatically sets bytes_per_sync to 1MB.

This option applies to table files.

pub fn wal_bytes_per_sync(self, val: u64) -> Self[src]

Same as bytes_per_sync, but applies to WAL files

Default: 0, turned off
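
A sketch that smooths out write I/O by syncing incrementally (the WAL value of 512KB is an illustrative choice):

    use rocks::options::DBOptions;

    // Sync table files every 1MB written and WAL files every 512KB.
    let opts = DBOptions::default()
        .bytes_per_sync(1 << 20)
        .wal_bytes_per_sync(512 << 10);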

pub fn add_listener<T: EventListener>(self, val: T) -> Self[src]

A vector of EventListeners whose callback functions will be called when specific RocksDB events happen.

pub fn enable_thread_tracking(self, val: bool) -> Self[src]

If true, then the status of the threads involved in this DB will be tracked and available via GetThreadList() API.

Default: false

pub fn delayed_write_rate(self, val: u64) -> Self[src]

The limited write rate to DB if soft_pending_compaction_bytes_limit or level0_slowdown_writes_trigger is triggered, or we are writing to the last mem table allowed and we allow more than 3 mem tables. It is calculated using size of user write requests before compression. RocksDB may decide to slow down more if the compaction still gets behind further.

Unit: byte per second.

Default: 16MB/s

pub fn allow_concurrent_memtable_write(self, val: bool) -> Self[src]

If true, allow multiple writers to update memtables in parallel. Only some memtable factories support concurrent writes; currently it is implemented only for SkipListFactory. Concurrent memtable writes are not compatible with inplace_update_support or filter_deletes. It is strongly recommended to set enable_write_thread_adaptive_yield if you are going to use this feature.

Default: true

pub fn enable_write_thread_adaptive_yield(self, val: bool) -> Self[src]

If true, threads synchronizing with the write batch group leader will wait for up to write_thread_max_yield_usec before blocking on a mutex. This can substantially improve throughput for concurrent workloads, regardless of whether allow_concurrent_memtable_write is enabled.

Default: true
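
A sketch enabling both options together, per the recommendation above (both already default to true; shown explicitly here):

    use rocks::options::DBOptions;

    // Parallel memtable writers plus adaptive yield for the write group.
    let opts = DBOptions::default()
        .allow_concurrent_memtable_write(true)
        .enable_write_thread_adaptive_yield(true);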

pub fn write_thread_max_yield_usec(self, val: u64) -> Self[src]

The maximum number of microseconds that a write operation will use a yielding spin loop to coordinate with other write threads before blocking on a mutex. (Assuming write_thread_slow_yield_usec is set properly) increasing this value is likely to increase RocksDB throughput at the expense of increased CPU usage.

Default: 100

pub fn write_thread_slow_yield_usec(self, val: u64) -> Self[src]

The latency in microseconds after which a std::this_thread::yield call (sched_yield on Linux) is considered to be a signal that other processes or threads would like to use the current core. Increasing this makes writer threads more likely to take CPU by spinning, which will show up as an increase in the number of involuntary context switches.

Default: 3

pub fn skip_stats_update_on_db_open(self, val: bool) -> Self[src]

If true, then DB::Open() will not update the statistics used to optimize compaction decisions by loading table properties from many files. Turning off the stats update (i.e. setting this to true) will improve DB::Open time, especially in a disk environment.

Default: false

pub fn wal_recovery_mode(self, val: WALRecoveryMode) -> Self[src]

Recovery mode to control the consistency while replaying WAL

Default: PointInTimeRecovery
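
A sketch selecting the documented default explicitly, assuming WALRecoveryMode is exported from the same options module:

    use rocks::options::{DBOptions, WALRecoveryMode};

    // Stop WAL replay at the first inconsistency (point-in-time consistency).
    let opts = DBOptions::default()
        .wal_recovery_mode(WALRecoveryMode::PointInTimeRecovery);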

pub fn allow_2pc(self, val: bool) -> Self[src]

If set to false, then recovery will fail when a prepared transaction is encountered in the WAL.

pub fn row_cache(self, val: Option<Cache>) -> Self[src]

A global cache for table-level rows.

Default: nullptr (disabled)

Not supported in ROCKSDB_LITE mode!

Rust: the value will be moved in and used via shared_ptr.

pub fn fail_if_options_file_error(self, val: bool) -> Self[src]

If true, then DB::Open / CreateColumnFamily / DropColumnFamily / SetOptions will fail if the options file is not detected or not properly persisted.

DEFAULT: false

pub fn dump_malloc_stats(self, val: bool) -> Self[src]

If true, then print malloc stats together with rocksdb.stats when printing to LOG.

DEFAULT: false

pub fn avoid_flush_during_recovery(self, val: bool) -> Self[src]

By default RocksDB replays WAL logs and flushes them on DB open, which may create very small SST files. If this option is enabled, RocksDB will try to avoid (but does not guarantee not to) flush during recovery. Also, existing WAL logs will be kept, so that if a crash happened before the flush, we still have logs to recover from.

DEFAULT: false

pub fn avoid_flush_during_shutdown(self, val: bool) -> Self[src]

By default RocksDB will flush all memtables on DB close if there is unpersisted data (i.e. with WAL disabled). The flush can be skipped to speed up DB close. Unpersisted data WILL BE LOST.

DEFAULT: false

Dynamically changeable through SetDBOptions() API.

pub fn allow_ingest_behind(self, val: bool) -> Self[src]

Set this option to true during creation of the database if you want to be able to ingest behind (call IngestExternalFile() skipping keys that already exist, rather than overwriting matching keys). Setting this option to true will affect two things:

  1. Disable some internal optimizations around SST file compression.
  2. Reserve the bottom-most level for ingested files only.

Note that num_levels should be >= 3 if this option is turned on.

DEFAULT: false

Immutable.

pub fn manual_wal_flush(self, val: bool) -> Self[src]

If true, the WAL is not flushed automatically after each write. Instead it relies on manual invocation of FlushWAL to write the WAL buffer to its file.

Default: false
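
A sketch enabling manual WAL flushing; the application must then invoke FlushWAL itself to persist buffered WAL writes:

    use rocks::options::DBOptions;

    // Buffer WAL writes until FlushWAL is invoked manually.
    let opts = DBOptions::default().manual_wal_flush(true);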

Trait Implementations

impl Debug for DBOptions[src]

impl Default for DBOptions[src]

impl Drop for DBOptions[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.