Struct rocks::table::BlockBasedTableOptions

pub struct BlockBasedTableOptions { /* fields omitted */ }

For advanced users only.

Implementations

impl BlockBasedTableOptions

pub fn cache_index_and_filter_blocks(self, val: bool) -> Self

Indicates whether index/filter blocks are put into the block cache.

If not specified, each "table reader" object will pre-load its index/filter block during table initialization.

TODO(kailiu): temporarily disable this feature by making the default value false.
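Since each setter takes `self` and returns `Self`, these options chain in builder style. A minimal sketch, assuming the `rocks` crate as a dependency (the method names are the ones documented on this page):

```rust
use rocks::table::BlockBasedTableOptions;

fn main() {
    // Cache index/filter blocks in the block cache rather than pre-loading
    // them per table reader, and pin the L0 ones so they stay resident.
    let _table_opts = BlockBasedTableOptions::default()
        .cache_index_and_filter_blocks(true)
        .cache_index_and_filter_blocks_with_high_priority(true)
        .pin_l0_filter_and_index_blocks_in_cache(true);
}
```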

pub fn cache_index_and_filter_blocks_with_high_priority(self, val: bool) -> Self

If cache_index_and_filter_blocks is enabled, cache index and filter blocks with high priority. If set to true, depending on implementation of block cache, index and filter blocks may be less likely to be evicted than data blocks.

pub fn pin_l0_filter_and_index_blocks_in_cache(self, val: bool) -> Self

If cache_index_and_filter_blocks is true and this option is true, then filter and index blocks are stored in the cache, but a reference is held in the "table reader" object, so the blocks are pinned and only evicted from the cache when the table reader is freed.

pub fn index_type(self, val: IndexType) -> Self

The index type that will be used for this table.

Default: BinarySearch

pub fn hash_index_allow_collision(self, val: bool) -> Self

This option is now deprecated. No matter what value it is set to, it will behave as if hash_index_allow_collision=true.

pub fn no_block_cache(self, val: bool) -> Self

Disable the block cache. If this is set to true, no block cache is used, and block_cache should not be set (in the C++ API, it should point to a nullptr object).

pub fn block_cache(self, val: Option<Cache>) -> Self

If non-NULL use the specified cache for blocks.

If NULL, rocksdb will automatically create and use an 8MB internal cache.

pub fn persistent_cache(self, val: Option<PersistentCache>) -> Self

If non-NULL, use the specified cache for pages read from the device. If NULL, no page cache is used.

pub fn block_cache_compressed(self, val: Option<Cache>) -> Self

If non-NULL use the specified cache for compressed blocks.

If NULL, rocksdb will not use a compressed block cache.

pub fn block_size(self, val: usize) -> Self

Approximate size of user data packed per block. Note that the block size specified here corresponds to uncompressed data. The actual size of the unit read from disk may be smaller if compression is enabled. This parameter can be changed dynamically.

pub fn block_size_deviation(self, val: i32) -> Self

This is used to close a block before it reaches the configured 'block_size'. If the percentage of free space in the current block is less than this specified number and adding a new record to the block will exceed the configured block size, then this block will be closed and the new record will be written to the next block.
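As a worked example of this close heuristic (a toy model, not part of the crate's API): with block_size = 4096 and block_size_deviation = 10, a block that already holds 3900 bytes has under 10 % free space, so a 300-byte record that would overflow the block causes the block to be closed instead.

```rust
// Toy model of the block_size_deviation heuristic (hypothetical helper,
// not the rocks API).
fn should_close_block(block_size: usize, used: usize, next_record: usize, deviation_pct: usize) -> bool {
    let free_pct = (block_size - used) * 100 / block_size;
    // Close early only when free space is below the deviation threshold
    // AND the incoming record would push the block past block_size.
    free_pct < deviation_pct && used + next_record > block_size
}

fn main() {
    assert!(should_close_block(4096, 3900, 300, 10));  // ~4 % free, would overflow
    assert!(!should_close_block(4096, 2000, 300, 10)); // plenty of room left
}
```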

pub fn block_restart_interval(self, val: i32) -> Self

Number of keys between restart points for delta encoding of keys. This parameter can be changed dynamically. Most clients should leave this parameter alone. The minimum value allowed is 1. Any smaller value will be silently overwritten with 1.
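To see why a larger restart interval saves space but makes point lookups decode more entries, here is a toy prefix (delta) encoder (illustrative only, not RocksDB's actual block format): every restart_interval-th key is stored in full, and keys in between store only a (shared-prefix length, suffix) pair relative to the previous key.

```rust
// Toy delta encoding with restart points (illustrative only).
fn delta_encode(keys: &[&str], restart_interval: usize) -> Vec<(usize, String)> {
    let mut out = Vec::new();
    for (i, key) in keys.iter().enumerate() {
        if i % restart_interval == 0 {
            // Restart point: the full key is stored, so a reader can seek
            // here without decoding any earlier entries.
            out.push((0, key.to_string()));
        } else {
            // Otherwise store only the suffix after the shared prefix.
            let prev = keys[i - 1];
            let shared = prev.bytes().zip(key.bytes()).take_while(|(a, b)| a == b).count();
            out.push((shared, key[shared..].to_string()));
        }
    }
    out
}

fn main() {
    let encoded = delta_encode(&["apple", "apply", "banana"], 2);
    assert_eq!(encoded[0], (0, "apple".to_string()));  // restart: full key
    assert_eq!(encoded[1], (4, "y".to_string()));      // shares "appl" with "apple"
    assert_eq!(encoded[2], (0, "banana".to_string())); // next restart point
}
```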

pub fn index_block_restart_interval(self, val: i32) -> Self

Same as block_restart_interval but used for the index block.

pub fn metadata_block_size(self, val: u64) -> Self

Block size for partitioned metadata. Currently applied to indexes when kTwoLevelIndexSearch is used and to filters when partition_filters is used.

Note: Since in the current implementation the filters and index partitions are aligned, an index/filter block is created when either index or filter block size reaches the specified limit.

Note: this limit is currently applied to only index blocks; a filter partition is cut right after an index block is cut

TODO(myabandeh): remove the note above when filter partitions are cut separately

pub fn partition_filters(self, val: bool) -> Self

Use partitioned full filters for each SST file.

Note: currently this option requires kTwoLevelIndexSearch to be set as well.

TODO(myabandeh): remove the note above once the limitation is lifted

TODO(myabandeh): this feature is in an experimental phase and shall not be used in production; either remove the feature or remove this comment once it is ready for production

pub fn use_delta_encoding(self, val: bool) -> Self

Use delta encoding to compress keys in blocks. ReadOptions::pin_data requires this option to be disabled.

Default: true

pub fn filter_policy(self, val: Option<FilterPolicy>) -> Self

If non-nullptr, use the specified filter policy to reduce disk reads.

Many applications will benefit from passing the result of NewBloomFilterPolicy() here.

pub fn whole_key_filtering(self, val: bool) -> Self

If true, place whole keys in the filter (not just prefixes). This must generally be true for gets to be efficient.

pub fn verify_compression(self, val: bool) -> Self

Verify that decompressing the compressed block gives back the input. This is a verification mode that we use to detect bugs in compression algorithms.

pub fn read_amp_bytes_per_bit(self, val: u32) -> Self

If used, for every data block we load into memory, we will create a bitmap of size ((block_size / read_amp_bytes_per_bit) / 8) bytes. This bitmap is used to determine what percentage of each block we actually read.

When this feature is used, Tickers::READ_AMP_ESTIMATE_USEFUL_BYTES and Tickers::READ_AMP_TOTAL_READ_BYTES can be used to calculate the read amplification via the formula READ_AMP_TOTAL_READ_BYTES / READ_AMP_ESTIMATE_USEFUL_BYTES.

Value => memory usage (percentage of loaded block memory):

1 => 12.50 %
2 => 06.25 %
4 => 03.12 %
8 => 01.56 %
16 => 00.78 %

Note: this number must be a power of 2; if not, it will be sanitized to the next lowest power of 2. For example, a value of 7 will be treated as 4, and a value of 19 as 16.

Default: 0 (disabled)
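The memory-usage table above follows directly from the bitmap formula. A small sketch (hypothetical helpers, not part of the crate's API) that also applies the power-of-two sanitization described in the note:

```rust
// Sanitize to the next lowest power of two (7 -> 4, 19 -> 16), as the
// note above describes.
fn sanitize_pow2(v: u32) -> u32 {
    let mut p = 1;
    while p * 2 <= v {
        p *= 2;
    }
    p
}

// Per-block bitmap size in bytes: (block_size / read_amp_bytes_per_bit) / 8.
fn read_amp_bitmap_bytes(block_size: u32, bytes_per_bit: u32) -> u32 {
    (block_size / sanitize_pow2(bytes_per_bit)) / 8
}

fn main() {
    assert_eq!(sanitize_pow2(7), 4);
    assert_eq!(sanitize_pow2(19), 16);
    // read_amp_bytes_per_bit = 1: 4096 / 1 / 8 = 512 bytes, i.e. 12.5 % of
    // a 4096-byte block, matching the first table row above.
    assert_eq!(read_amp_bitmap_bytes(4096, 1), 512);
}
```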

pub fn format_version(self, val: u32) -> Self

We currently have three versions:

0 -- This version is currently written out by all RocksDB's versions by default. Can be read by really old RocksDB's. Doesn't support changing checksum (default is CRC32).

1 -- Can be read by RocksDB's versions since 3.0. Supports non-default checksum, like xxHash. It is written by RocksDB when BlockBasedTableOptions::checksum is something other than kCRC32c. (version 0 is silently upconverted)

2 -- Can be read by RocksDB's versions since 3.10. Changes the way we encode compressed blocks with LZ4, BZip2 and Zlib compression. If you don't plan to run RocksDB before version 3.10, you should probably use this.

This option only affects newly written tables. When reading existing tables, the version information is read from the footer.

Trait Implementations

impl Default for BlockBasedTableOptions

impl Drop for BlockBasedTableOptions

fn drop(&mut self)

Since the underlying C++ object is held via shared_ptr, it is safe for the Rust side to drop its reference.
