Crate bitcoin_leveldb

Modules§

in_memory_env
iter_state
merging_iterator
port
posix_env
posix_lock_table
repairer
shared_state
singleton_env

Structs§

Arena
Block
BlockBuilder
BlockBuilder generates blocks where keys are prefix-compressed: when we store a key, we drop the prefix shared with the previous string. This helps reduce the space requirement significantly. Furthermore, once every K keys, we do not apply the prefix compression and store the entire key. We call this a “restart point”. The tail end of the block stores the offsets of all of the restart points, and can be used to do a binary search when looking for a particular key. Values are stored as-is (without compression) immediately following the corresponding key.

An entry for a particular key-value pair has the form:

    shared_bytes:   varint32
    unshared_bytes: varint32
    value_length:   varint32
    key_delta:      char[unshared_bytes]
    value:          char[value_length]

shared_bytes == 0 for restart points.

The trailer of the block has the form:

    restarts:     uint32[num_restarts]
    num_restarts: uint32

restarts[i] contains the offset within the block of the ith restart point.
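The entry layout above can be sketched with a toy encoder (a hypothetical helper, not this crate's API; the three lengths are written as single-byte varints here, which is valid only for values below 128):

```rust
/// Append one prefix-compressed entry in the BlockBuilder format above.
/// Toy sketch: lengths are emitted as one-byte varints (values < 128 only).
fn encode_entry(buf: &mut Vec<u8>, last_key: &[u8], key: &[u8], value: &[u8]) {
    // Number of leading bytes shared with the previous key.
    let shared = last_key.iter().zip(key).take_while(|(a, b)| a == b).count();
    buf.push(shared as u8);                // shared_bytes: varint32
    buf.push((key.len() - shared) as u8);  // unshared_bytes: varint32
    buf.push(value.len() as u8);           // value_length: varint32
    buf.extend_from_slice(&key[shared..]); // key_delta
    buf.extend_from_slice(value);          // value
}

fn main() {
    let mut buf = Vec::new();
    encode_entry(&mut buf, b"", b"apple", b"1");      // restart point: shared == 0
    encode_entry(&mut buf, b"apple", b"apply", b"2"); // shares "appl", stores only "y"
    println!("{:?}", buf);
}
```

Encoding “apple” and then “apply” stores only the single unshared byte “y” for the second key.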
BlockContents
BlockHandle
BlockHandle is a pointer to the extent of a file that stores a data block or a meta block.
BlockIter

BloomFilterPolicy
BytewiseComparatorImpl

Cache
CacheHandle
Opaque handle to an entry stored in the cache.
CacheRep
Constructor
Helper class for tests to unify the interface between BlockBuilder/TableBuilder and Block/Table.
CorruptionReporter
Notified when the log reader encounters corruption.
DBImpl

DBImplWriter
Information kept for every waiting writer.
DBIter
Memtables and sstables that make up the DB representation contain (userkey,seq,type) => uservalue entries. DBIter combines multiple entries for the same userkey found in the DB representation into a single entry while accounting for sequence numbers, deletion markers, overwrites, etc.
EmptyIterator

EnvWrapper
An implementation of Env that forwards all calls to another Env. May be useful to clients who wish to override just part of the functionality of another Env.
ErrorEnv
A wrapper that allows injection of errors.
ExtendedRecordTypes
FileMetaData
FileState
FileStateBlocks
FileStateRefs
FilterBlockBuilder
A FilterBlockBuilder is used to construct all of the filters for a particular Table. It generates a single string which is stored as a special block in the Table. The sequence of calls to a FilterBlockBuilder must match the regexp: (StartBlock AddKey*)* Finish
FilterBlockReader
Footer
Footer encapsulates the fixed information stored at the tail end of every table file.
HASH
HandleTable
We provide our own simple hash table since it removes a whole bunch of porting hacks and is also faster than some of the built-in hash table implementations in some of the compiler/runtime combinations we have tested. E.g., readrandom speeds up by ~5% over the g++ 4.4.3 built-in hashtable.
Histogram
InMemoryEnv

IterState
LRUCache
A single shard of a sharded cache.
LRUCacheInner
LRUHandle
An entry is a variable-length heap-allocated structure. Entries are kept in a circular doubly linked list ordered by access time.
LevelDB
LevelDBCache
LevelDBComparator

LevelDBEnv

LevelDBFileLock
LevelDBFilterPolicy

LevelDBIterator
LevelDBIteratorCleanupNode
Cleanup functions are stored in a singly-linked list. The list’s head node is inlined in the iterator.
LevelDBIteratorInner
LevelDBIteratorWrapper
An internal wrapper class with an interface similar to Iterator that caches the valid() and key() results for an underlying iterator. This can help avoid virtual function calls and also gives better cache locality.
LevelDBLogger
LevelDBOptions
LevelDBRandomFile
LevelDBReadOptions
LevelDBSeqFile
LevelDBSnapshot
LevelDBWritableFile
LevelDBWriteBatch
LevelDBWriteOptions
Limiter
Helper class to limit resource usage to avoid exhaustion. Currently used to limit read-only file descriptors and mmap file usage so that we do not run out of file descriptors or virtual memory, or run into kernel performance problems for very large databases.
LogReader
LogWriter
MemTable
MemTableInserter

MemTableIterator

MemTableKeyComparator
MergingIterator
MutexLock
NoDestructor
Wraps an instance whose destructor is never called. This is intended for use with function-level static variables.
NoOpLogger

Options
Options to control the behavior of a database (passed to DB::Open).
PosixEnv

PosixFileLock
Instances are thread-safe because they are immutable.
PosixLockTable
Tracks the files locked by PosixEnv::LockFile(). We maintain a separate set instead of relying on fcntl(F_SETLK) because fcntl(F_SETLK) does not provide any protection against multiple uses from the same process. Instances are thread-safe because all member data is guarded by a mutex.
PosixLogger
PosixMmapReadableFile
Implements random read access in a file using mmap(). Instances of this class are thread-safe, as required by the RandomAccessFile API. Instances are immutable and Read() only calls thread-safe library functions.
PosixRandomAccessFile
Implements random read access in a file using pread(). Instances of this class are thread-safe, as required by the RandomAccessFile API. Instances are immutable and Read() only calls thread-safe library functions.
PosixSequentialFile
Implements sequential read access in a file using read(). Instances of this class are thread-friendly but not thread-safe, as required by the SequentialFile API.
PosixWritableFile

Random
A very simple random number generator. Not especially good at generating truly random bits, but good enough for our needs in this package.
RandomAccessFileImpl

ReadOptions
Options that control read operations.
Repairer
Saver
SequentialFileImpl

ShardedLRUCache
SharedState
State shared by all concurrent executions of the same benchmark.
SingletonEnv
Wraps an Env instance whose destructor is never called.

Intended usage:

    using PlatformSingletonEnv = SingletonEnv;
    c_void ConfigurePosixEnv(int param) {
        PlatformSingletonEnv::AssertEnvNotInitialized();
        // set global configuration flags.
    }
    Env* Env::Default() {
        static PlatformSingletonEnv default_env;
        return default_env.env();
    }
SkipList
SkipListIterator
Iteration over the contents of a skip list. Intentionally copyable.
SkipListNode
Implementation details follow.
Slice
Slice is a simple structure containing a pointer into some external storage and a size. The user of a Slice must ensure that the slice is not used after the corresponding external storage has been deallocated. Multiple threads can invoke const methods on a Slice without external synchronization, but if any of the threads may call a non-const method, all threads accessing the same Slice must use external synchronization.
SnapshotImpl
Snapshots are kept in a doubly-linked list in the DB. Each SnapshotImpl corresponds to a particular sequence number.
SnapshotList

Stats

Status
StdoutPrinter
Table
A Table is a sorted map from strings to strings. Tables are immutable and persistent. A Table may be safely accessed from multiple threads without external synchronization.
TableAndFile
TableBuilder
TableBuilder provides the interface used to build a Table (an immutable and sorted map from keys to values). Multiple threads can invoke const methods on a TableBuilder without external synchronization, but if any of the threads may call a non-const method, all threads accessing the same TableBuilder must use external synchronization.
TableBuilderRep
TableCache
TableRep
TableTest
Test
TestArgs
Tester
An instance of Tester is allocated to hold temporary state during the execution of an assertion.
ThreadState
Per-thread state for concurrent executions of the same benchmark.
TwoLevelIterator
VersionLevelFileNumIterator
An internal iterator. For a given version/level pair, yields information about the files in the level. For a given entry, key() is the largest key that occurs in the file, and value() is a 16-byte value containing the file number and file size, both encoded using EncodeFixed64.
WritableFileImpl

WriteBatch
WriteBatch::rep_ :=
    sequence: fixed64
    count: fixed32
    data: record[count]
record :=
    kTypeValue varstring varstring
    | kTypeDeletion varstring
varstring :=
    len: varint32
    data: uint8[len]
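The rep_ layout can be sketched with a toy encoder (hypothetical helpers, not this crate's API; the kTypeValue tag value 0x01 follows upstream LevelDB and is assumed here, and varstring lengths are written as single-byte varints, valid only below 128):

```rust
/// Build the 12-byte WriteBatch header: sequence (fixed64) then count
/// (fixed32), both little-endian.
fn write_batch_header(sequence: u64, count: u32) -> Vec<u8> {
    let mut rep = Vec::with_capacity(12);
    rep.extend_from_slice(&sequence.to_le_bytes());
    rep.extend_from_slice(&count.to_le_bytes());
    rep
}

/// Append a kTypeValue record: tag byte, then key and value as varstrings.
fn append_put(rep: &mut Vec<u8>, key: &[u8], value: &[u8]) {
    rep.push(0x01);            // kTypeValue (assumed tag value)
    rep.push(key.len() as u8); // varstring len (single-byte varint32)
    rep.extend_from_slice(key);
    rep.push(value.len() as u8);
    rep.extend_from_slice(value);
}

fn main() {
    let mut rep = write_batch_header(7, 1);
    append_put(&mut rep, b"k", b"v");
    println!("{:?}", rep);
}
```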
WriteBatchInternal
WriteBatchInternal provides static methods for manipulating a WriteBatch that we don’t want in the public WriteBatch interface.
WriteBatchItemPrinter
Called on every item found in a WriteBatch.
WriteOptions
Options that control write operations.

Enums§

CompressionType
DB contents are stored in a set of blocks, each of which holds a sequence of key,value pairs. Each block may be compressed before being stored in a file. The following enum describes which compression method (if any) is used to compress a block.
DBIterDirection
Which direction is the iterator currently moving? (1) When moving forward, the internal iterator is positioned at the exact entry that yields this->key(), this->value(). (2) When moving backwards, the internal iterator is positioned just before all entries whose user key == this->key().
FileType
LogRecordType
SaverState
Callback from TableCache::Get().
StatusCode
Tag
Tag numbers for serialized VersionEdit. These numbers are written to disk and should not be changed.
TestType

Constants§

BLOCK_HANDLE_MAX_ENCODED_LENGTH
Maximum encoding length of a BlockHandle.
BLOCK_SIZE
BLOCK_TRAILER_SIZE
1-byte type + 32-bit crc.
BUCKET_LIMIT
BYTE_EXTENSION_TABLE
CRC32XOR
CRCs are pre- and post-conditioned by xoring with all ones.
DEFAULT_MMAP_LIMIT
Up to 4096 mmap regions for 64-bit binaries; none for 32-bit.
FILE_STATE_BLOCK_SIZE
FILTER_BASE
FILTER_BASE_LG
Generate a new filter every 2KB of data.
FOOTER_ENCODED_LENGTH
Note: the serialization of a Footer will always occupy exactly this many bytes. It consists of two block handles and a magic number.
HAVE_CRC32C
Define to 1 if you have Google CRC32C.
HAVE_FDATASYNC
Define to 1 if you have a definition for fdatasync() in <unistd.h>.
HAVE_FULLFSYNC
Define to 1 if you have a definition for F_FULLFSYNC in <fcntl.h>.
HAVE_O_CLOEXEC
Define to 1 if you have a definition for O_CLOEXEC in <fcntl.h>.
HAVE_SNAPPY
Define to 1 if you have Google Snappy.
HEADER
WriteBatch header has an 8-byte sequence number followed by a 4-byte count.
HISTOGRAM_NUM_BUCKETS
LEVELDB_DELETEFILE_UNDEFINED
LEVELDB_IS_BIG_ENDIAN
Define to 1 if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel and VAX).
LOG_BLOCK_SIZE
LOG_HEADER_SIZE
Header is checksum (4 bytes), length (2 bytes), type (1 byte).
LOG_MAX_RECORD_TYPE
MAJOR_VERSION
MASK_DELTA
MINOR_VERSION
NUM_NON_TABLE_CACHE_FILES
NUM_SHARDS
NUM_SHARD_BITS
OPEN_BASE_FLAGS
STRIDE_EXTENSION_TABLE0
STRIDE_EXTENSION_TABLE1
STRIDE_EXTENSION_TABLE2
STRIDE_EXTENSION_TABLE3
SkipListMaxHeight
TABLE_MAGIC_NUMBER
TableMagicNumber was picked by running echo http://code.google.com/p/leveldb/ | sha1sum and taking the leading 64 bits.
WRITABLE_FILE_BUFFER_SIZE

Traits§

CacheErase
CacheInsert
CacheInterface
CacheLookup
CacheNewId
CachePrune
CacheRelease
CacheTotalCharge
CacheValue
CompactRange
Compare
ConstructorFinishImpl
ConstructorInterface
ConstructorNewIterator
CreateDir
CreateFilter
DB
A DB is a persistent ordered map from keys to values. A DB is safe for concurrent access from multiple threads without any external synchronization.
Delete
DeleteDir
DeleteFile
Env
In the C++ original, Env had the following annotated empty Default impl: Return a default environment suitable for the current operating system. Sophisticated users may wish to provide their own Env implementation instead of relying on this default environment. The result of Default() belongs to leveldb and must never be deleted.
FileExists
FileLock
Identifies a locked file.
FilterPolicy
FindShortSuccessor
FindShortestSeparator
Get
GetApproximateSizes
GetChildren
GetFileSize
GetName
GetProperty
GetSnapshot
GetTestDirectory
KeyMayMatch
LevelDBIteratorInterface
LevelDBIteratorStatus
LockFile
LogReaderReporter
Interface for reporting errors.
Logger
An interface for writing log messages.
Logv
Name
NewAppendableFile
NewIterator
NewLogger
NewRandomAccessFile
NewSequentialFile
NewWritableFile
Next
NowMicros
Prev
Put
RandomAccessFile
A file abstraction for randomly reading the contents of a file.
RandomAccessFileRead
ReleaseSnapshot
RenameFile
Schedule
Seek
SeekToFirst
SeekToLast
SequentialFile
A file abstraction for reading sequentially through a file.
SequentialFileRead
SequentialFileSkip
SleepForMicroseconds
SliceComparator
A Comparator object provides a total order across slices that are used as keys in an sstable or a database. A Comparator implementation must be thread-safe since leveldb may invoke its methods concurrently from multiple threads.
Snapshot
Abstract handle to a particular state of a DB. A Snapshot is an immutable object and can therefore be safely accessed from multiple threads without any external synchronization.
StartThread
UnlockFile
Valid
WritableFile
A file abstraction for sequential writing. The implementation must provide buffering since callers may append small fragments at a time to the file.
WritableFileAppend
WritableFileClose
WritableFileFlush
WritableFileSync
WriteBatchDelete
WriteBatchHandler
WriteBatchPut

Functions§

add_boundary_inputs
Extracts the largest file b1 from |compaction_files| and then searches for a b2 in |level_files| for which user_key(u1) == user_key(l2). If it finds such a file b2 (known as a boundary file) it adds it to |compaction_files| and then searches again using this new upper bound.

If there are two blocks, b1=(l1, u1) and b2=(l2, u2), and user_key(u1) == user_key(l2), and if we compact b1 but not b2, then a subsequent get operation will yield an incorrect result because it will return the record from b2 in level i rather than from b1, because it searches level by level for records matching the supplied user key.

Parameters:

    in     level_files:      list of files to search for boundary files.
    in/out compaction_files: list of files to extend by adding boundary files.
after_file
append_escaped_string_to
Append a human-readable printout of “value” to *str. Escapes any non-printable characters found in “value”.
append_number_to
Append a human-readable printout of “num” to *str.
append_with_space
before_file
benchdb_bench_main
benchdb_bench_sqlite3_main
benchdb_bench_tree_db_main
bloom_hash
build_table
Build a Table file from the contents of *iter. The generated file will be named according to meta->number. On success, the rest of *meta will be filled with metadata about the generated table. If no data is present in *iter, meta->file_size will be set to zero, and no Table file will be produced.
bytewise_comparator
cleanup_iterator_state
clip_to_range
Fix user-supplied options to be reasonable.
compressible_string
Store in *dst a string of length “len” that will compress to “N*compressed_fraction” bytes and return a Slice that references the generated data.
consume_decimal_number
Parse a human-readable number from “*in” into *val. On success, advances “*in” past the consumed number and sets “*val” to the numeric value. Otherwise, returns false and leaves *in in an unspecified state.
copy_string
crc32c_can_accelerate
Determine if the CPU running this program can accelerate the CRC32C calculation.
crc32c_extend
Return the crc32c of concat(A, data[0,n-1]) where init_crc is the crc32c of some string A. Extend() is often used to maintain the crc32c of a stream of data.
crc32c_mask
Return a masked representation of crc. Motivation: it is problematic to compute the CRC of a string that contains embedded CRCs. Therefore we recommend that CRCs stored somewhere (e.g., in files) should be masked before being stored.
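Upstream LevelDB implements masking as a 15-bit rotation plus a constant; a sketch (the rotation amount and the kMaskDelta value 0xa282ead8 are taken from upstream LevelDB and assumed to match this port):

```rust
const MASK_DELTA: u32 = 0xa282_ead8; // upstream LevelDB's kMaskDelta (assumed)

/// Mask: rotate the crc right by 15 bits, then add the delta.
fn mask(crc: u32) -> u32 {
    crc.rotate_right(15).wrapping_add(MASK_DELTA)
}

/// Unmask: subtract the delta, then rotate left by 15 bits.
fn unmask(masked: u32) -> u32 {
    masked.wrapping_sub(MASK_DELTA).rotate_left(15)
}

fn main() {
    // Masking must round-trip for any crc value.
    for crc in [0u32, 1, 0xdead_beef, u32::MAX] {
        assert_eq!(unmask(mask(crc)), crc);
    }
    println!("ok");
}
```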
crc32c_read_uint32le
Reads a little-endian 32-bit integer from a 32-bit-aligned buffer.
crc32c_round_up
Returns the smallest address >= the given address that is aligned to N bytes. N must be a power of two.
crc32c_unmask
Return the crc whose masked representation is masked_crc.
crc32c_value
Return the crc32c of data[0,n-1].
current_file_name
Return the name of the current file. This file contains the name of the current manifest file. The result will be prefixed with “dbname”.
dbleveldbutil_main
decode_entry
Helper routine: decode the next block entry starting at “p”, storing the number of shared key bytes, non_shared key bytes, and the length of the value in “*shared”, “*non_shared”, and “*value_length”, respectively. Will not dereference past “limit”. If any errors are detected, returns nullptr. Otherwise, returns a pointer to the key delta (just past the three decoded values).
decode_fixed32
Lower-level versions of Get… that read directly from a character buffer without any bounds checking.
decode_fixed64
delete_block
delete_cached_block
delete_entry
descriptor_file_name
Return the name of the descriptor file for the db named by “dbname” and the specified incarnation number. The result will be prefixed with “dbname”.
do_write_string_to_file
dump_descriptor
dump_log
dump_table
encode_fixed32
Lower-level versions of Put… that write directly into a character buffer. REQUIRES: dst has enough space for the value being written.
encode_fixed64
encode_key
Encode a suitable internal key target for “target” and return it. Uses *scratch as scratch space, and the returned pointer will point into this scratch space.
encode_varint32
Lower-level versions of Put… that write directly into a character buffer and return a pointer just past the last byte written. REQUIRES: dst has enough space for the value being written.
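The varint32 format these routines share can be sketched as follows (hypothetical helpers, not this crate's API: 7 payload bits per byte, least-significant group first, continuation bit 0x80 on every byte except the last):

```rust
/// Encode v as a varint32.
fn put_varint32(dst: &mut Vec<u8>, mut v: u32) {
    while v >= 0x80 {
        dst.push((v & 0x7f) as u8 | 0x80); // more bytes follow
        v >>= 7;
    }
    dst.push(v as u8); // final byte: high bit clear
}

/// Decode the counterpart; returns (value, bytes consumed), or None on
/// a truncated/malformed input.
fn get_varint32(src: &[u8]) -> Option<(u32, usize)> {
    let mut result = 0u32;
    for (i, &byte) in src.iter().enumerate().take(5) {
        result |= ((byte & 0x7f) as u32) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((result, i + 1));
        }
    }
    None
}

fn main() {
    let mut buf = Vec::new();
    put_varint32(&mut buf, 300); // 300 does not fit in 7 bits, so two bytes
    println!("{:?}", buf);
    assert_eq!(get_varint32(&buf), Some((300, 2)));
}
```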
encode_varint64
error_check
escape_string
Return a human-readable version of “value”. Escapes any non-printable characters found in “value”.
exec_error_check
expanded_compaction_byte_size_limit
Maximum number of bytes in all compacted files. We avoid expanding the lower level file set of a compaction if it would make the total compaction cover more than this many bytes.
find_file
Return the smallest index i such that files[i]->largest >= key. Return files.size() if there is no such file. REQUIRES: “files” contains a sorted list of non-overlapping files.
find_largest_key
Finds the largest key in a vector of files. Returns true if files is not empty.
find_smallest_boundary_file
Finds the minimum file b2=(l2, u2) in level_files for which l2 > u1 and user_key(l2) == user_key(u1).
get_file_iterator
get_internal_key
get_length_prefixed_slice_with_limit
get_level
get_varint32
Standard Get… routines parse a value from the beginning of a Slice and advance the slice past the parsed value.
get_varint64
get_varint_32ptr
Pointer-based variants of GetVarint… These either store a value in *v and return a pointer just past the parsed value, or return nullptr on error. These routines only look at bytes in the range [p..limit-1].
get_varint_32ptr_fallback
Internal routine for use by the fallback path of GetVarint32Ptr.
get_varint_64ptr
guess_type
handle_dump_command
info_log_file_name
Return the name of the info log file for “dbname”.
init_type_crc
leveldb_approximate_sizes
leveldb_cache_create_lru
leveldb_cache_destroy
leveldb_close
leveldb_compact_range
leveldb_comparator_create
leveldb_comparator_destroy
leveldb_create_default_env
leveldb_create_iterator
leveldb_create_snapshot
leveldb_delete
leveldb_destroy_db
leveldb_env_destroy
leveldb_env_get_test_directory
leveldb_filterpolicy_create
leveldb_filterpolicy_create_bloom
leveldb_filterpolicy_destroy
leveldb_free
leveldb_get
leveldb_iter_destroy
leveldb_iter_get_error
leveldb_iter_key
leveldb_iter_next
leveldb_iter_prev
leveldb_iter_seek
leveldb_iter_seek_to_first
leveldb_iter_seek_to_last
leveldb_iter_valid
leveldb_iter_value
leveldb_major_version
leveldb_minor_version
leveldb_open
leveldb_options_create
leveldb_options_destroy
leveldb_options_set_block_restart_interval
leveldb_options_set_block_size
leveldb_options_set_cache
leveldb_options_set_comparator
leveldb_options_set_compression
leveldb_options_set_create_if_missing
leveldb_options_set_env
leveldb_options_set_error_if_exists
leveldb_options_set_filter_policy
leveldb_options_set_info_log
leveldb_options_set_max_file_size
leveldb_options_set_max_open_files
leveldb_options_set_paranoid_checks
leveldb_options_set_write_buffer_size
leveldb_property_value
leveldb_put
leveldb_readoptions_create
leveldb_readoptions_destroy
leveldb_readoptions_set_fill_cache
leveldb_readoptions_set_snapshot
leveldb_readoptions_set_verify_checksums
leveldb_release_snapshot
leveldb_repair_db
leveldb_write
leveldb_writebatch_append
leveldb_writebatch_clear
leveldb_writebatch_create
leveldb_writebatch_delete
leveldb_writebatch_destroy
leveldb_writebatch_iterate
leveldb_writebatch_put
leveldb_writeoptions_create
leveldb_writeoptions_destroy
leveldb_writeoptions_set_sync
lock_file_name
Return the name of the lock file for the db named by “dbname”. The result will be prefixed with “dbname”.
lock_or_unlock
log
Log the specified data to *info_log if info_log is non-null.
log_file_name
Return the name of the log file with the specified number in the db named by “dbname”. The result will be prefixed with “dbname”.
make_file_name
max_bytes_for_level
max_file_size_for_level
max_grand_parent_overlap_bytes
Maximum bytes of overlap in grandparent (i.e., level+2) before we stop building a single file in a level->level+1 compaction.
max_mmaps
Return the maximum number of concurrent mmaps.
max_open_files
Return the maximum number of read-only files to keep open.
new_bloom_filter_policy
Return a new filter policy that uses a bloom filter with approximately the specified number of bits per key. A good value for bits_per_key is 10, which yields a filter with a ~1% false positive rate. Callers must delete the result after any database that is using the result has been closed. Note: if you are using a custom comparator that ignores some parts of the keys being compared, you must not use NewBloomFilterPolicy() and must provide your own FilterPolicy that also ignores the corresponding parts of the keys. For example, if the comparator ignores trailing spaces, it would be incorrect to use a FilterPolicy (like NewBloomFilterPolicy) that does not ignore trailing spaces in keys.
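As a rough check of the “10 bits per key gives ~1%” claim, the textbook false-positive estimate (1 - e^(-k/b))^k can be evaluated with k probes chosen the way upstream LevelDB does (k ≈ b·ln 2, rounded down and clamped to [1, 30]; that rounding rule is assumed to match this port):

```rust
/// Estimated bloom-filter false-positive rate for b bits per key,
/// using k = floor(b * ln 2) probes clamped to [1, 30].
fn bloom_fp_rate(bits_per_key: f64) -> f64 {
    let k = (bits_per_key * std::f64::consts::LN_2)
        .floor()
        .clamp(1.0, 30.0);
    // Per-probe hit probability is 1 - e^(-k/b); a false positive
    // requires all k probes to hit.
    (1.0 - (-k / bits_per_key).exp()).powf(k)
}

fn main() {
    let fp = bloom_fp_rate(10.0);
    println!("{:.4}", fp); // in the neighborhood of 1%
}
```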
new_db_iterator
Return a new iterator that converts internal keys (yielded by “*internal_iter”) that were live at the specified “sequence” number into appropriate user keys.
new_lru_cache
Create a new cache with a fixed size capacity. This implementation of Cache uses a least-recently-used eviction policy.
new_mem_env
Returns a new environment that stores its data in memory and delegates all non-file-storage tasks to base_env. The caller must delete the result when it is no longer needed. *base_env must remain live while the result is in use.
new_merging_iterator
Return an iterator that provides the union of the data in children[0,n-1]. Takes ownership of the child iterators and will delete them when the result iterator is deleted. The result does no duplicate suppression. I.e., if a particular key is present in K child iterators, it will be yielded K times. REQUIRES: n >= 0.
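The merging semantics can be illustrated with a toy k-way merge over sorted in-memory lists (hypothetical, not the crate's MergingIterator): each child's current head is compared and the smallest is yielded, so a key present in K children appears K times.

```rust
/// Merge sorted children into one sorted sequence, with no duplicate
/// suppression: equal keys are yielded once per child holding them.
fn merge<'a>(children: &[Vec<&'a str>]) -> Vec<&'a str> {
    let mut pos = vec![0usize; children.len()]; // cursor into each child
    let mut out = Vec::new();
    loop {
        // Find the child whose current entry has the smallest key.
        let mut best: Option<usize> = None;
        for (i, child) in children.iter().enumerate() {
            if pos[i] < child.len()
                && best.map_or(true, |b| child[pos[i]] < children[b][pos[b]])
            {
                best = Some(i);
            }
        }
        match best {
            Some(i) => {
                out.push(children[i][pos[i]]);
                pos[i] += 1;
            }
            None => return out, // all children exhausted
        }
    }
}

fn main() {
    let merged = merge(&[vec!["a", "c"], vec!["b", "c"]]);
    println!("{:?}", merged); // "c" appears twice: once per child
}
```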
new_two_level_iterator
Return a new two-level iterator. A two-level iterator contains an index iterator whose values point to a sequence of blocks where each block is itself a sequence of key,value pairs. The returned two-level iterator yields the concatenation of all key/value pairs in the sequence of blocks. Takes ownership of “index_iter” and will delete it when no longer needed. Uses a supplied function to convert an index_iter value into an iterator over the contents of the corresponding block.
newest_first
number_to_string
Return a human-readable printout of “num”.
old_info_log_file_name
Return the name of the old info log file for “dbname”.
parse_file_name
If filename is a leveldb file, store the type of the file in *type. The number encoded in the filename is stored in *number. If the filename was successfully parsed, returns true. Else returns false.
posix_error
print_log_contents
Print contents of a log file. (*func)() is called on every record.
put_fixed32
Standard Put… routines append to a string.
put_fixed64
put_length_prefixed_slice
put_varint32
put_varint64
random_key
Return a random key with the specified length that may contain interesting characters (e.g. \x00, \xff, etc.).
random_seed
Return a randomization seed for this run. Typically returns the same number on repeated invocations of this binary, but automated runs may be able to vary the seed.
random_string
Store in *dst a random string of length “len” and return a Slice that references the generated data.
read_block
Read the block identified by “handle” from “file”. On failure return non-OK. On success fill *result and return OK.
read_file_to_string
A utility routine: read contents of named file into *data.
register_test
Register the specified test. Typically not used directly, but invoked via the macro expansion of TEST.
release_block
run_all_tests
Run some of the tests registered by the TEST() macro. If the environment variable “LEVELDB_TESTS” is not set, runs all tests. Otherwise, runs only the tests whose name contains the value of “LEVELDB_TESTS” as a substring. E.g., suppose the tests are:

    TEST(Foo, Hello) { … }
    TEST(Foo, World) { … }

    LEVELDB_TESTS=Hello will run the first test
    LEVELDB_TESTS=o will run both tests
    LEVELDB_TESTS=Junk will run no tests

Returns 0 if all tests pass. Dies or returns a non-zero value if some test fails.
sanitize_options
Sanitize db options. The caller should delete result.info_log if it is not equal to src.info_log.
save_error
save_value
set_current_file
Make the CURRENT file point to the descriptor file with the specified number.
snappy_compression_supported
some_file_overlaps_range
Returns true iff some file in “files” overlaps the user key range [*smallest,*largest]. smallest==nullptr represents a key smaller than all keys in the DB. largest==nullptr represents a key larger than all keys in the DB. REQUIRES: If disjoint_sorted_files, files[] contains disjoint ranges in sorted order.
sst_table_file_name
Return the legacy file name for an sstable with the specified number in the db named by “dbname”. The result will be prefixed with “dbname”.
step_error_check
table_cache_size
table_file_name
Return the name of the sstable with the specified number in the db named by “dbname”. The result will be prefixed with “dbname”.
tabletable_test_main
target_file_size
temp_file_name
Return the name of a temporary file owned by the db named “dbname”. The result will be prefixed with “dbname”.
testhash_main
teststatus_test_main
tmp_dir
Return the directory to use for temporary storage.
total_file_size
unref_entry
usage
varint_length
Returns the length of the varint32 or varint64 encoding of “v”.
version_edit_printer
Called on every log record (each one of which is a VersionEdit) found in a kDescriptorFile.
wal_checkpoint
write_batch_printer
Called on every log record (each one of which is a WriteBatch) found in a kLogFile.
write_string_to_file
A utility routine: write “data” to the named file.
write_string_to_file_sync
A utility routine: write “data” to the named file and Sync() it.

Type Aliases§

BlockFunction
LevelDBIteratorCleanupFunction
MemTableTable
PosixDefaultEnv