Struct rocks::db::DB

pub struct DB<'a> { /* fields omitted */ }

A DB is a persistent ordered map from keys to values.

A DB is safe for concurrent access from multiple threads without any external synchronization.

Examples

use rocks::rocksdb::*;

let db = DB::open(Options::default().map_db_options(|db| db.create_if_missing(true)),
                  "./data").unwrap();
// insert kv
db.put(&WriteOptions::default(), b"my-key", b"my-value").unwrap();

// get kv
let val = db.get(&ReadOptions::default(), b"my-key").unwrap();
println!("got value {}", String::from_utf8_lossy(&val));

assert_eq!(val, b"my-value");

Methods

impl<'a> DB<'a>

Open the database with the specified name.

Stores a pointer to a heap-allocated database in *dbptr and returns OK on success.

Stores nullptr in *dbptr and returns a non-OK status on error. Caller should delete *dbptr when it is no longer needed.

Open DB with column families.

db_options specifies database-specific options.

column_families is the vector of all column families in the database, containing column family name and options. You need to open ALL column families in the database. To get the list of column families, you can use ListColumnFamilies(). Also, you can open only a subset of column families for read-only access.

The default column family name is 'default' and it's stored in rocksdb::kDefaultColumnFamilyName.

If everything is OK, on return handles will be the same size as column_families: handles[i] will be a handle that you can use to operate on column family column_families[i].

Before deleting the DB, you have to close all column families by calling DestroyColumnFamilyHandle() with all the handles.

Open the database for read only. All DB interfaces that modify data, like put/delete, will return an error. If the db is opened in read-only mode, no compactions will happen.

Not supported in ROCKSDB_LITE, in which case the function will return Status::NotSupported.

ListColumnFamilies will open the DB specified by the argument name and return the list of all column families in that DB through the column_families argument. The ordering of column families in column_families is unspecified.

Create a column family and return its handle through the argument handle.

Set the database entry for "key" to "value". If "key" already exists, it will be overwritten. Returns OK on success, and a non-OK status on error.

Note: consider setting options.sync = true.

Remove the database entry (if any) for "key". Returns OK on success, and a non-OK status on error. It is not an error if "key" did not exist in the database.

Note: consider setting options.sync = true.

Remove the database entry for "key". Requires that the key exists and was not overwritten. Returns OK on success, and a non-OK status on error. It is not an error if "key" did not exist in the database.

If a key is overwritten (by calling Put() multiple times), then the result of calling SingleDelete() on this key is undefined. SingleDelete() only behaves correctly if there has been only one Put() for this key since the previous call to SingleDelete() for this key.

This feature is currently an experimental performance optimization for a very specific workload. It is up to the caller to ensure that SingleDelete is only used for a key that is not deleted using Delete() or written using Merge(). Mixing SingleDelete operations with Deletes and Merges can result in undefined behavior.

Note: consider setting options.sync = true.
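The SingleDelete contract above can be sketched with a stdlib-only toy model. ToyDb and its methods are invented for illustration and are not part of this crate's API; the model simply tracks how many Put()s have happened per key since the last SingleDelete() and flags the undefined case.

```rust
use std::collections::HashMap;

/// Toy model of the SingleDelete contract (illustration only, not the rocks API).
struct ToyDb {
    data: HashMap<Vec<u8>, Vec<u8>>,
    puts_since_single_delete: HashMap<Vec<u8>, u32>,
}

impl ToyDb {
    fn new() -> Self {
        ToyDb { data: HashMap::new(), puts_since_single_delete: HashMap::new() }
    }

    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.data.insert(key.to_vec(), value.to_vec());
        *self.puts_since_single_delete.entry(key.to_vec()).or_insert(0) += 1;
    }

    /// Returns Err when the contract is violated (key overwritten since the
    /// last SingleDelete) -- in real RocksDB the result would be undefined.
    fn single_delete(&mut self, key: &[u8]) -> Result<(), &'static str> {
        let count = self.puts_since_single_delete.remove(key).unwrap_or(0);
        if count > 1 {
            return Err("undefined: key was Put() more than once since last SingleDelete()");
        }
        self.data.remove(key);
        Ok(())
    }

    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.data.get(key)
    }
}

fn main() {
    let mut db = ToyDb::new();
    db.put(b"a", b"1");
    assert!(db.single_delete(b"a").is_ok());   // exactly one Put: well-defined
    assert!(db.get(b"a").is_none());

    db.put(b"b", b"1");
    db.put(b"b", b"2");                        // overwritten by a second Put
    assert!(db.single_delete(b"b").is_err());  // contract violated
}
```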

Removes the database entries in the range ["begin_key", "end_key"), i.e., including "begin_key" and excluding "end_key". Returns OK on success, and a non-OK status on error. It is not an error if no keys exist in the range ["begin_key", "end_key").

This feature is currently an experimental performance optimization for deleting very large ranges of contiguous keys. Invoking it many times or on small ranges may severely degrade read performance; in particular, the resulting performance can be worse than calling Delete() for each key in the range. Note also the degraded read performance affects keys outside the deleted ranges, and affects database operations involving scans, like flush and compaction.

Consider setting ReadOptions::ignore_range_deletions = true to speed up reads for key(s) that are known to be unaffected by range deletions.
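The half-open interval ["begin_key", "end_key") can be illustrated with a BTreeMap model (stdlib only; `delete_range` here is a sketch of the semantics, not the crate's method):

```rust
use std::collections::BTreeMap;

/// Toy model of DeleteRange semantics on an ordered map: removes keys in the
/// half-open range [begin, end) -- begin included, end excluded.
fn delete_range(map: &mut BTreeMap<Vec<u8>, Vec<u8>>, begin: &[u8], end: &[u8]) {
    let doomed: Vec<Vec<u8>> = map
        .range(begin.to_vec()..end.to_vec())
        .map(|(k, _)| k.clone())
        .collect();
    for k in doomed {
        map.remove(&k);
    }
}

fn main() {
    let mut db = BTreeMap::new();
    for k in ["a", "b", "c", "d"] {
        db.insert(k.as_bytes().to_vec(), b"v".to_vec());
    }
    delete_range(&mut db, b"b", b"d");
    // "b" and "c" are removed; "d" (the exclusive end key) survives.
    assert!(db.contains_key(&b"a".to_vec()));
    assert!(db.contains_key(&b"d".to_vec()));
    assert_eq!(db.len(), 2);
}
```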

Merge the database entry for "key" with "value". Returns OK on success, and a non-OK status on error. The semantics of this operation is determined by the user provided merge_operator when opening DB.

Note: consider setting options.sync = true.

Apply the specified updates to the database.

If updates contains no update, WAL will still be synced if options.sync=true.

Returns OK on success, non-OK on failure.

Note: consider setting options.sync = true.
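The batch semantics can be modeled in a few lines of stdlib Rust (a toy sketch: `Update` and `apply_batch` are invented names, not the crate's WriteBatch API). The point is that a batch is a recorded sequence of updates handed to the DB as one unit; in a real DB the whole batch commits atomically with a single WAL write.

```rust
use std::collections::HashMap;

/// Toy model of a write batch: a recorded sequence of updates.
enum Update {
    Put(Vec<u8>, Vec<u8>),
    Delete(Vec<u8>),
}

/// Applies every update in order; real RocksDB commits the batch atomically.
fn apply_batch(db: &mut HashMap<Vec<u8>, Vec<u8>>, batch: &[Update]) {
    for u in batch {
        match u {
            Update::Put(k, v) => { db.insert(k.clone(), v.clone()); }
            Update::Delete(k) => { db.remove(k); }
        }
    }
}

fn main() {
    let mut db = HashMap::new();
    db.insert(b"old".to_vec(), b"1".to_vec());
    let batch = vec![
        Update::Put(b"a".to_vec(), b"1".to_vec()),
        Update::Delete(b"old".to_vec()),
    ];
    apply_batch(&mut db, &batch);
    assert_eq!(db.get(&b"a".to_vec()), Some(&b"1".to_vec()));
    assert!(db.get(&b"old".to_vec()).is_none());
}
```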

If the database contains an entry for "key" store the corresponding value in *value and return OK.

If there is no entry for "key" leave *value unchanged and return a status for which Status::IsNotFound() returns true.

May return some other Status on an error.

If keys[i] does not exist in the database, then the i'th returned status will be one for which Status::IsNotFound() is true, and (*values)[i] will be set to some arbitrary value (often ""). Otherwise, the i'th returned status will have Status::ok() true, and (*values)[i] will store the value associated with keys[i].

(*values) will always be resized to be the same size as (keys). Similarly, the number of returned statuses will be the number of keys.

Note: keys will not be "de-duplicated". Duplicate keys will return duplicate values in order.
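The result shape described above -- one result per input key, in input order, with duplicates preserved -- can be sketched with a stdlib-only model (`multi_get` here is an illustration, not the crate's method):

```rust
use std::collections::HashMap;

/// Toy model of MultiGet's result shape: one result per input key, in input
/// order, with duplicate keys NOT de-duplicated.
fn multi_get(db: &HashMap<Vec<u8>, Vec<u8>>, keys: &[&[u8]]) -> Vec<Option<Vec<u8>>> {
    keys.iter().map(|k| db.get(*k).cloned()).collect()
}

fn main() {
    let mut db = HashMap::new();
    db.insert(b"k1".to_vec(), b"v1".to_vec());

    let keys: [&[u8]; 3] = [b"k1", b"missing", b"k1"];
    let results = multi_get(&db, &keys);

    assert_eq!(results.len(), 3);                 // same size as keys
    assert_eq!(results[0], Some(b"v1".to_vec()));
    assert_eq!(results[1], None);                 // the NotFound case
    assert_eq!(results[2], Some(b"v1".to_vec())); // duplicate key, duplicate value
}
```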

If the key definitely does not exist in the database, then this method returns false, else true. If the caller wants to obtain value when the key is found in memory, a bool for 'value_found' must be passed. 'value_found' will be true on return if value has been set properly.

This check is potentially lighter-weight than invoking DB::Get(). One way to make this lighter weight is to avoid doing any IOs.

The default implementation here returns true and sets 'value_found' to false.
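The asymmetric guarantee -- false is authoritative, true is only a hint -- is the same one a Bloom filter gives, which is one way such a check avoids I/O. A one-hash toy filter (illustration only, not how RocksDB implements KeyMayExist) makes the property concrete:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy one-hash Bloom filter: no false negatives, possible false positives.
struct ToyBloom {
    bits: Vec<bool>,
}

impl ToyBloom {
    fn new(size: usize) -> Self {
        ToyBloom { bits: vec![false; size] }
    }

    fn index(&self, key: &[u8]) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.bits.len()
    }

    fn insert(&mut self, key: &[u8]) {
        let i = self.index(key);
        self.bits[i] = true;
    }

    /// Like KeyMayExist: `false` means definitely absent; `true` only "maybe".
    fn may_exist(&self, key: &[u8]) -> bool {
        self.bits[self.index(key)]
    }
}

fn main() {
    let mut filter = ToyBloom::new(1024);
    filter.insert(b"present");
    // No false negatives: an inserted key always reports "may exist".
    assert!(filter.may_exist(b"present"));
    // A miss is authoritative; a hit may still be a false positive.
}
```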

Return a heap-allocated iterator over the contents of the database. The result of NewIterator() is initially invalid (caller must call one of the Seek methods on the iterator before using it).

Caller should delete the iterator when it is no longer needed. The returned iterator should be deleted before this db is deleted.

Return a handle to the current DB state. Iterators created with this handle will all observe a stable snapshot of the current DB state. The caller must call ReleaseSnapshot(result) when the snapshot is no longer needed.

nullptr will be returned if the DB fails to take a snapshot or does not support snapshot.

Release a previously acquired snapshot. The caller must not use "snapshot" after this call.
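The stability guarantee can be modeled with a frozen copy of the map (a real RocksDB snapshot is a cheap sequence-number marker, not a copy, but reads through it behave the same way):

```rust
use std::collections::BTreeMap;

fn main() {
    let mut db: BTreeMap<&str, &str> = BTreeMap::new();
    db.insert("k", "v1");

    // Toy snapshot: freeze the current state.
    let snapshot = db.clone();

    db.insert("k", "v2");   // writes after the snapshot...
    db.insert("new", "x");

    // ...are invisible through the snapshot, which observes a stable state,
    assert_eq!(snapshot.get("k"), Some(&"v1"));
    assert_eq!(snapshot.get("new"), None);
    // while reads against the DB itself see the latest writes.
    assert_eq!(db.get("k"), Some(&"v2"));
}
```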

DB implementations can export properties about their state via this method. If "property" is a valid property understood by this DB implementation (see Properties struct above for valid options), fills "*value" with its current value and returns true. Otherwise, returns false.

Similar to GetProperty(), but only works for a subset of properties whose return value is an integer. Returns the value as an integer. Supported properties:

  • "rocksdb.num-immutable-mem-table"
  • "rocksdb.mem-table-flush-pending"
  • "rocksdb.compaction-pending"
  • "rocksdb.background-errors"
  • "rocksdb.cur-size-active-mem-table"
  • "rocksdb.cur-size-all-mem-tables"
  • "rocksdb.size-all-mem-tables"
  • "rocksdb.num-entries-active-mem-table"
  • "rocksdb.num-entries-imm-mem-tables"
  • "rocksdb.num-deletes-active-mem-table"
  • "rocksdb.num-deletes-imm-mem-tables"
  • "rocksdb.estimate-num-keys"
  • "rocksdb.estimate-table-readers-mem"
  • "rocksdb.is-file-deletions-enabled"
  • "rocksdb.num-snapshots"
  • "rocksdb.oldest-snapshot-time"
  • "rocksdb.num-live-versions"
  • "rocksdb.current-super-version-number"
  • "rocksdb.estimate-live-data-size"
  • "rocksdb.min-log-number-to-keep"
  • "rocksdb.total-sst-files-size"
  • "rocksdb.base-level"
  • "rocksdb.estimate-pending-compaction-bytes"
  • "rocksdb.num-running-compactions"
  • "rocksdb.num-running-flushes"
  • "rocksdb.actual-delayed-write-rate"
  • "rocksdb.is-write-stopped"

Same as GetIntProperty(), but this one returns the aggregated int property from all column families.

For each i in [0,n-1], store in sizes[i] the approximate file system space used by keys in [range[i].start .. range[i].limit).

Note that the returned sizes measure file system space usage, so if the user data compresses by a factor of ten, the returned sizes will be one-tenth the size of the corresponding user data size.

include_flags defines whether the returned size should include the recently written data in the mem-tables (if the mem-table type supports it), data serialized to disk, or both. include_flags should be of type DB::SizeApproximationFlags.
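The compression caveat above is simple arithmetic; the numbers below are illustrative, not measured:

```rust
/// Sketch of the note above: GetApproximateSizes reports file-system usage,
/// so if user data compresses 10x, the reported size is roughly one-tenth of
/// the logical data written.
fn main() {
    let user_data_bytes: u64 = 1_000_000; // logical key+value bytes in a range
    let compression_factor: u64 = 10;     // assumed 10x compression
    let approx_on_disk = user_data_bytes / compression_factor;
    assert_eq!(approx_on_disk, 100_000);  // reported size is one-tenth
}
```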

This method is similar to GetApproximateSizes, except it returns the approximate number of records in the memtables.

Compact the underlying storage for the key range [*begin, *end]. The actual compaction interval might be a superset of [*begin, *end]. In particular, deleted and overwritten versions are discarded, and the data is rearranged to reduce the cost of operations needed to access the data. This operation should typically only be invoked by users who understand the underlying implementation.

begin==nullptr is treated as a key before all keys in the database. end==nullptr is treated as a key after all keys in the database. Therefore the following call will compact the entire database:

db->CompactRange(options, nullptr, nullptr);

Note that after the entire database is compacted, all data are pushed down to the last level containing any data. If the total data size after compaction is reduced, that level might not be appropriate for hosting all the files. In this case, client could set options.change_level to true, to move the files back to the minimum level capable of holding the data set or a given level (specified by non-negative options.target_level).

This function will wait until all currently running background processes finish. After it returns, no background process will be run until UnblockBackgroundWork is called.

This function will enable automatic compactions for the given column families if they were previously disabled. The function will first set the disable_auto_compactions option for each column family to 'false', after which it will schedule a flush/compaction.

NOTE: Setting disable_auto_compactions to 'false' through SetOptions() API does NOT schedule a flush/compaction afterwards, and only changes the parameter itself within the column family option.

Number of levels used for this DB.

Maximum level to which a new compacted memtable is pushed if it does not create overlap.

Number of files in level-0 that would stop writes.

Get DB name -- the exact same name that was provided as an argument to DB::Open()

Flush all mem-table data.

Sync the WAL. Note that Write() followed by SyncWAL() is not exactly the same as Write() with sync=true: in the latter case the changes won't be visible until the sync is done.

Currently only works if allow_mmap_writes = false in Options.

The sequence number of the most recent transaction.

Prevent file deletions. Compactions will continue to occur, but no obsolete files will be deleted. Calling this multiple times has the same effect as calling it once.

Allow compactions to delete obsolete files.

If force == true, the call to EnableFileDeletions() will guarantee that file deletions are enabled after the call, even if DisableFileDeletions() was called multiple times before.

If force == false, EnableFileDeletions will only enable file deletion after it's been called at least as many times as DisableFileDeletions(), enabling the two methods to be called by two threads concurrently without synchronization -- i.e., file deletions will be enabled only after both threads call EnableFileDeletions().
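The pairing rule above amounts to a disable counter. A stdlib-only model (`FileDeletions` and its methods are invented names for illustration):

```rust
/// Toy model of the DisableFileDeletions / EnableFileDeletions counter
/// described above (illustration only, not the crate API).
struct FileDeletions {
    disable_count: u32,
}

impl FileDeletions {
    fn new() -> Self {
        FileDeletions { disable_count: 0 }
    }

    fn disable(&mut self) {
        self.disable_count += 1; // each caller stacks another "disable"
    }

    fn enable(&mut self, force: bool) {
        if force {
            self.disable_count = 0; // force guarantees deletions are enabled
        } else {
            self.disable_count = self.disable_count.saturating_sub(1);
        }
    }

    fn deletions_enabled(&self) -> bool {
        self.disable_count == 0
    }
}

fn main() {
    let mut fd = FileDeletions::new();
    fd.disable();
    fd.disable();                     // e.g. two threads each disabled deletions
    fd.enable(false);
    assert!(!fd.deletions_enabled()); // still disabled: one enable outstanding
    fd.enable(false);
    assert!(fd.deletions_enabled());  // both callers enabled: deletions resume
    fd.disable();
    fd.enable(true);                  // force: enabled regardless of the count
    assert!(fd.deletions_enabled());
}
```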

GetLiveFiles followed by GetSortedWalFiles can generate a lossless backup.

Retrieve the list of all files in the database. The files are relative to the dbname and are not absolute paths. The valid size of the manifest file is returned in manifest_file_size. The manifest file is an ever-growing file, but only the portion specified by manifest_file_size is valid for this snapshot. Setting flush_memtable to true does a Flush before recording the live files. Setting flush_memtable to false is useful when we don't want to wait for a flush, which may in turn have to wait for a compaction to complete, taking an indeterminate time.

In case you have multiple column families, even if flush_memtable is true, you still need to call GetSortedWalFiles after GetLiveFiles to compensate for new data that arrived to already-flushed column families while other column families were flushing.

Delete the file name from the db directory and update the internal state to reflect that. Supports deletion of sst and log files only. 'name' must be a path relative to the db directory, e.g. 000001.sst, /archive/000003.log.

Returns a list of all table files with their level, start key and end key.

Obtains the meta data of the specified column family of the DB. Status::NotFound() will be returned if the current DB does not have any column family matching the specified name.

If cf_name is not specified, the metadata of the default column family will be returned.

IngestExternalFile() will load a list of external SST files (1) into the DB. We will try to find the lowest possible level that the file can fit in, and ingest the file into this level (2). A file that has a key range overlapping the memtable key range will require us to flush the memtable before ingesting the file.

  • External SST files can be created using SstFileWriter
  • We will try to ingest the files into the lowest possible level even if the file compression doesn't match the level compression

Sets, in identity, the globally unique ID created at database creation time by invoking Env::GenerateUniqueId(). Returns Status::OK if identity could be set properly.

Returns the default column family handle.

Trait Implementations

impl<'a> Debug for DB<'a>

Formats the value using the given formatter.

impl<'a> Sync for DB<'a>