# bitcoinleveldb-lru
A faithful, low-level Rust port of LevelDB's sharded LRU cache implementation, as used inside the bitcoin-rs project. This crate exposes the internal cache primitives that back a LevelDB-compatible block cache, including sharded LRU caches, intrusive list nodes, and a custom hash table.
## Overview
`bitcoinleveldb-lru` implements the exact cache semantics of LevelDB's C++ `ShardedLRUCache`, translated into Rust with careful attention to:

- Intrusive LRU lists implemented as circular doubly linked lists of `LRUHandle` nodes.
- Sharding across a fixed number of shards (`NUM_SHARDS`) to reduce contention in multi-threaded workloads.
- A custom handle table (`HandleTable`) that acts as a compact open-hashing structure keyed by `LevelDBSlice` values.
- Precise reference-counting semantics identical to LevelDB: an entry is freed when `refs == 0` and `in_cache == false`.
- Explicit manual memory management, using `libc::malloc`/`free` and flexible array members that inline the key bytes adjacent to the `LRUHandle` header.
The design is intentionally low-level and unsafe in places, because it aims to be wire-compatible with LevelDB expectations and to interoperate with foreign code (e.g., C and C++) with minimal overhead.
The crate is not a general-purpose Rust cache with ergonomic types; instead, it is the internal engine for a LevelDB-style block cache used by higher-layer crates in the bitcoin-rs repository.
## Core Data Structures

### LRUHandle

`LRUHandle` represents a single cache entry. It is allocated on the heap with a flexible trailing key byte array:
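The crate's actual definition is not reproduced here; the following is a hedged sketch of the header, with field names taken from the descriptions below. The exact types, field order, and the deleter signature are assumptions, not the crate's real layout:

```rust
use std::os::raw::c_void;

// Hedged sketch only: field names follow the prose in this README; the
// deleter signature here is a hypothetical stand-in for the crate's
// Slice-based deleter.
#[repr(C)]
pub struct LRUHandle {
    pub value: *mut c_void,        // user payload
    pub deleter: Option<fn(*const u8, usize, *mut c_void)>, // stand-in signature
    pub next_hash: *mut LRUHandle, // bucket chain inside HandleTable
    pub next: *mut LRUHandle,      // circular LRU / in-use list link
    pub prev: *mut LRUHandle,      // circular LRU / in-use list link
    pub charge: usize,             // logical weight of this entry
    pub key_length: usize,         // number of key bytes stored inline
    pub in_cache: bool,            // whether the cache itself holds a reference
    pub refs: u32,                 // reference count
    pub hash: u32,                 // cached hash of the key
    pub key_data: [u8; 1],         // first byte of the inline key
                                   // (a flexible array member in C terms)
}

fn main() {
    // The inline key begins at the offset of `key_data`; the real crate
    // over-allocates with libc::malloc so the key bytes follow the header.
    println!("header size: {} bytes", std::mem::size_of::<LRUHandle>());
}
```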
Semantically:
- `key_data` stores `key_length` key bytes inline; this minimizes allocations.
- `next`/`prev` form a circular doubly linked list for the LRU and in-use lists.
- `next_hash` links handles within a hash bucket in `HandleTable`.
- `charge` is the logical size/weight of the entry (e.g., bytes of a block), used to constrain total cache usage.
- `refs` is a reference count; the cache itself holds one reference while the entry is in the cache, and clients hold additional references while they are using the entry.
Important invariants (matching LevelDB):
- An entry is destroyed when `refs == 0` and `in_cache == false`.
- When the only remaining reference is held by the cache (`refs == 1 && in_cache == true`), the entry resides on the LRU list.
- When additional references exist, the entry is instead on the `in_use` list.
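These rules amount to a small decision table over `(refs, in_cache)`. A standalone sketch (the `Placement` enum is illustrative, not a type in the crate):

```rust
// Hedged sketch of the entry lifecycle rule described above.
#[derive(Debug, PartialEq)]
enum Placement {
    Freed,     // refs == 0 && !in_cache: destroy the entry
    LruList,   // refs == 1 && in_cache: only the cache holds it
    InUseList, // refs  > 1 && in_cache: clients hold extra references
    Detached,  // erased from the cache but still referenced by clients
}

fn placement(refs: u32, in_cache: bool) -> Placement {
    // Note: in_cache == true implies refs >= 1, since the cache itself
    // holds one reference; (0, true) would be an invariant violation.
    match (refs, in_cache) {
        (0, false) => Placement::Freed,
        (1, true) => Placement::LruList,
        (_, true) => Placement::InUseList,
        (_, false) => Placement::Detached,
    }
}

fn main() {
    assert_eq!(placement(1, true), Placement::LruList);
    assert_eq!(placement(2, true), Placement::InUseList);
    assert_eq!(placement(1, false), Placement::Detached);
    assert_eq!(placement(0, false), Placement::Freed);
}
```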
### HandleTable

`HandleTable` is a custom hash table over `*mut LRUHandle` values, tuned for speed and to avoid C++-porting complications:

- `length` is the number of buckets, always a power of two.
- `elems` counts entries.
- `list` points to a C-style array of `length` buckets, where each bucket is a linked list threaded through `LRUHandle::next_hash`.
The core operations:
- `insert(&mut self, h: *mut LRUHandle) -> *mut LRUHandle` – inserts a handle, possibly replacing an existing one with the same key and hash.
- `lookup(&mut self, key_: &Slice, hash_: u32) -> *mut LRUHandle` – locates a handle by key and precomputed hash.
- `remove(&mut self, key_: &Slice, hash_: u32) -> *mut LRUHandle` – detaches a handle from the table without altering its refcount.
The table grows dynamically via `resize` when `elems > length`, rehashing all handles into a new bucket array. Allocation and deallocation are performed with `libc::malloc`/`free`.
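Since `length` is a power of two and growth triggers whenever `elems > length`, the average bucket chain stays at most one entry long. A sketch of the sizing rule, assuming the same doubling scheme as upstream LevelDB's `HandleTable::Resize` (start at four buckets, double until the count covers `elems`):

```rust
// Hedged sketch of the resize sizing rule; mirrors upstream LevelDB's
// HandleTable::Resize, which this crate is described as porting.
fn new_bucket_count(elems: usize) -> usize {
    let mut n = 4; // minimum bucket count, always a power of two
    while n < elems {
        n *= 2; // doubling preserves the power-of-two invariant
    }
    n
}

fn main() {
    assert_eq!(new_bucket_count(3), 4);
    assert_eq!(new_bucket_count(5), 8);
    assert_eq!(new_bucket_count(1000), 1024);
}
```

Keeping `length` a power of two also lets the bucket index be computed as `hash & (length - 1)` instead of a modulo.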
### LRUCacheInner

`LRUCacheInner` contains the core state of a non-sharded LRU cache:

- `usage` tracks the total `charge` of all entries currently in the cache.
- `lru` and `in_use` are sentinel nodes that serve as list heads for the LRU and in-use lists, respectively.
- `table` indexes entries by `(key, hash)`.
The constructor `LRUCacheInner::new()` creates sentinel nodes via `LRUHandle::make_sentinel()` and links them into empty circular lists.
### LRUCache

`LRUCache` is a thread-safe, single-shard LRU cache, wrapping `LRUCacheInner` in a `Mutex` and a `RefCell`.

Key operations (per shard):

- `insert(&mut self, key_: &Slice, hash_: u32, value: *mut c_void, charge: usize, deleter: fn(&Slice, *mut c_void) -> c_void) -> *mut CacheHandle`
- `lookup(&mut self, key_: &Slice, hash_: u32) -> *mut CacheHandle`
- `release(&mut self, handle: *mut CacheHandle)`
- `erase(&mut self, key_: &Slice, hash_: u32)`
- `prune(&mut self)` – aggressively drops all LRU entries whose `refs == 1`.
- `total_charge(&self) -> usize`
The public API follows LevelDB's Cache interface style:
- `insert` returns a `CacheHandle` pointer for the client; the caller is responsible for calling `release` when done.
- `lookup` returns a handle with an incremented refcount, or null if not found.
- `release` decrements the refcount and triggers deletion if the entry is no longer cached.
- `erase` removes an entry from the cache; it will be freed once no client holds it.
Internally, the helper functions `ref_inner`, `unref_inner`, `finish_erase_inner`, `lru_append_node`, and `lru_remove_node` enforce list and refcount consistency.
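The list manipulation beneath these helpers is ordinary circular doubly linked list surgery. A standalone sketch (the node type and function bodies are illustrative, not the crate's actual code, though the names mirror its helpers):

```rust
// Hedged sketch of intrusive circular-list append/remove on a
// stripped-down node type.
struct Node {
    next: *mut Node,
    prev: *mut Node,
}

unsafe fn lru_append_node(list: *mut Node, e: *mut Node) {
    // Insert `e` just before `list`; when `list` is the LRU sentinel,
    // this makes `e` the most-recently-used entry.
    (*e).next = list;
    (*e).prev = (*list).prev;
    (*(*e).prev).next = e;
    (*(*e).next).prev = e;
}

unsafe fn lru_remove_node(e: *mut Node) {
    // Unlink `e`; neighbors bypass it in both directions.
    (*(*e).next).prev = (*e).prev;
    (*(*e).prev).next = (*e).next;
}

// Build a sentinel list, append one node, remove it, and confirm the
// sentinel points back at itself (the empty-list state).
fn demo_append_remove() -> bool {
    unsafe {
        let head = Box::into_raw(Box::new(Node { next: std::ptr::null_mut(), prev: std::ptr::null_mut() }));
        (*head).next = head; // an empty circular list: sentinel links to itself
        (*head).prev = head;

        let e = Box::into_raw(Box::new(Node { next: std::ptr::null_mut(), prev: std::ptr::null_mut() }));
        lru_append_node(head, e);
        let linked = (*head).next == e && (*head).prev == e;

        lru_remove_node(e);
        let empty = (*head).next == head && (*head).prev == head;

        drop(Box::from_raw(e));
        drop(Box::from_raw(head));
        linked && empty
    }
}

fn main() {
    assert!(demo_append_remove());
}
```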
### ShardedLRUCache

`ShardedLRUCache` composes multiple `LRUCache` instances to reduce lock contention:

- The total capacity is divided across `NUM_SHARDS` by `per_shard = (capacity + NUM_SHARDS - 1) / NUM_SHARDS`.
- `hash_slice(s: &Slice) -> u32` computes a 32-bit hash (via `leveldb_hash`) and is used to derive the shard index.
- `shard(hash_: u32) -> u32` maps a hash to a shard index by taking the upper `NUM_SHARD_BITS` bits.
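Assuming `NUM_SHARDS = 1 << NUM_SHARD_BITS` with `NUM_SHARD_BITS = 4` (the upstream LevelDB default; the crate's constants may differ), the arithmetic looks like:

```rust
// Hedged sketch of the sharding arithmetic with assumed constants.
const NUM_SHARD_BITS: u32 = 4;
const NUM_SHARDS: usize = 1 << NUM_SHARD_BITS; // 16 shards

// Per-shard capacity rounds up so the shards together cover `capacity`.
fn per_shard(capacity: usize) -> usize {
    (capacity + NUM_SHARDS - 1) / NUM_SHARDS
}

// The shard index is the top NUM_SHARD_BITS bits of the 32-bit hash.
fn shard(hash: u32) -> u32 {
    hash >> (32 - NUM_SHARD_BITS)
}

fn main() {
    assert_eq!(per_shard(100), 7); // ceil(100 / 16)
    assert_eq!(shard(0x0FFF_FFFF), 0);
    assert_eq!(shard(0xFFFF_FFFF), 15); // top 4 bits select the shard
}
```

Using the high bits for shard selection leaves the low bits free for bucket indexing inside each shard's `HandleTable`, so the two derived indices stay uncorrelated.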
Public operations:
- `new(capacity: usize) -> Self` – allocates shards and configures per-shard capacities.
- `insert(&mut self, key_: &Slice, value: *mut c_void, charge: usize, deleter: fn(&Slice, *mut c_void) -> c_void) -> *mut CacheHandle`
- `lookup(&mut self, key_: &Slice) -> *mut CacheHandle`
- `release(&mut self, handle: *mut CacheHandle)` – determines the shard from the handle's hash and forwards to that shard.
- `erase(&mut self, key_: &Slice)` – computes the hash from the key and erases in the appropriate shard.
- `value(&mut self, handle: *mut CacheHandle) -> *mut c_void` – obtains the underlying stored pointer.
- `new_id(&mut self) -> u64` – monotonically increasing ID generation protected by a `RawMutex`.
- `prune(&mut self)` – runs `prune()` on each shard.
- `total_charge(&self) -> usize` – aggregates `total_charge` across all shards.
The sharded design is crucial when this cache is used in high-concurrency database workloads, such as a Bitcoin node replaying or validating large blockchains.
## Safety and Memory Model
This crate heavily uses raw pointers and manual allocation. Safety is established by a set of invariants enforced through assertions and debug checks:
- All `LRUHandle` pointers must be properly aligned (verified via `lru_debug_verify_list` and `unref_inner_list_contains`).
- Circular-list invariants are asserted: for each node, `next.prev == node` and `prev.next == node`.
- `unref_inner` asserts that `refs_before > 0` before decrementing.
- `LRUCache::prune`, `LRUCache::drop`, and `HandleTable::resize` contain additional checks and safety breaks to avoid undefined behavior in case of corruption.
Because the interface exposes raw *mut CacheHandle and *mut c_void, it is intended for controlled internal usage inside bitcoin-rs and LevelDB-compatible infrastructure. It is not designed as an ergonomic public API for general Rust applications.
When using it directly, you must:
- Ensure that each acquired handle is released exactly once.
- Ensure that `value` pointers remain valid for the lifetime expected by the cache, and that the provided `deleter` correctly deallocates or manages them.
- Treat `Slice` values as immutable while they are in use as keys in the cache.
## API Sketch and Usage
The intended high-level usage pattern closely matches LevelDB's Cache interface.
### Creating a sharded cache

```rust
use bitcoinleveldb_lru::{Cache, ShardedLRUCache, new_lru_cache};
use libc::c_void;

// Capacity in some logical units (often bytes)
let capacity: usize = 64 * 1024 * 1024; // 64 MiB

// High-level: construct directly
let mut cache = ShardedLRUCache::new(capacity);

// Or, for FFI-style usage: obtain a raw Cache pointer
let raw_cache: *mut Cache = new_lru_cache(capacity);
```
### Inserting and retrieving entries

Assuming you have a `Slice` type compatible with LevelDB semantics and a buffer stored elsewhere:

```rust
use libc::c_void;

// `key` is your LevelDB-style Slice
let key: Slice = ...;

// Example value: a heap allocation whose ownership passes to the cache
let value: *mut c_void = Box::into_raw(Box::new([0u8; 16])) as *mut c_void;
let charge: usize = 1; // cost of this entry

// `deleter` must match the crate's deleter signature and reclaim `value`
let handle = cache.insert(&key, value, charge, deleter);
assert!(!handle.is_null());

// Later: look up by key
let handle2 = cache.lookup(&key);
if !handle2.is_null() {
    let stored: *mut c_void = cache.value(handle2);
    // ... use `stored` ...
    cache.release(handle2);
}

// Release the original insert reference when you're done
cache.release(handle);
```
### Erasing and pruning

```rust
// Erase a specific key; the entry will be freed once no client holds it
cache.erase(&key);

// Manually prune all LRU entries that are only referenced by the cache
cache.prune();

// Measure usage
let total = cache.total_charge();
```
## Concurrency
ShardedLRUCache is designed for concurrent access:
- Each `LRUCache` shard owns a `Mutex<LRUCacheInner>`; operations on distinct shards can proceed in parallel.
- Shard selection is deterministic, based on the high bits of the key's hash (via `hash_slice` and `shard`).
- `new_id` uses a `parking_lot::RawMutex` to serialize ID allocation without interfering with cache operations.
Note that the API presented here mostly uses &mut self; in the broader bitcoin-rs ecosystem, the cache is typically wrapped and shared in a way that hides mutability details while preserving correctness.
## Diagnostics and Debugging
The crate includes a rich set of debug utilities:
- `lru_debug_verify_inner(inner: &mut LRUCacheInner)` – verifies structural invariants of the LRU and in-use lists.
- `lru_debug_verify_list(head: *mut LRUHandle, list_name: &str)` – checks pointer alignment and next/prev consistency.
- `unref_inner_list_contains(head: *mut LRUHandle, target: *mut LRUHandle, list_name: &str) -> bool` – linear search with alignment checks and cycle protection.
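The core check in a list verifier of this kind is that every node's neighbors point back at it, with a step limit guarding against cycles that never return to the sentinel. A hedged standalone sketch (not the crate's actual implementation):

```rust
// Hedged sketch of a circular-list invariant check: for each node,
// next.prev == node and prev.next == node must hold.
struct Node {
    next: *mut Node,
    prev: *mut Node,
}

unsafe fn verify_list(head: *mut Node, max_steps: usize) -> bool {
    let mut cur = head;
    for _ in 0..=max_steps {
        if (*(*cur).next).prev != cur || (*(*cur).prev).next != cur {
            return false; // broken link
        }
        cur = (*cur).next;
        if cur == head {
            return true; // completed the cycle back to the sentinel
        }
    }
    false // never returned to the sentinel: corruption or runaway cycle
}

// An empty sentinel list (sentinel linked to itself) must verify.
fn demo_verify() -> bool {
    unsafe {
        let head = Box::into_raw(Box::new(Node { next: std::ptr::null_mut(), prev: std::ptr::null_mut() }));
        (*head).next = head;
        (*head).prev = head;
        let ok = verify_list(head, 16);
        drop(Box::from_raw(head));
        ok
    }
}

fn main() {
    assert!(demo_verify());
}
```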
Additionally, many functions emit trace/debug/warn/error messages for instrumentation, which can be routed through the log ecosystem.
These facilities are useful for diagnosing subtle reference-counting or pointer errors when integrating with external code or when porting from other LevelDB variants.
## Integration Notes

- Repository: this crate lives inside the `bitcoin-rs` monorepo: https://github.com/klebs6/bitcoin-rs
- License: MIT
- Edition: Rust 2021
It is primarily intended as an internal component of a LevelDB-compatible storage subsystem for Bitcoin-related applications. While it can be reused elsewhere, you should treat it as a low-level, expert-only primitive: misuse can easily produce memory unsafety.
If you need a safe, generic LRU cache for application-level code, consider using a different crate that exposes a high-level, type-safe API.
## Caveats

- The public interface is pointer-centric and `unsafe`-oriented; misuse can cause undefined behavior.
- The semantics explicitly mirror LevelDB, including details like `charge`, the two lists (`lru_` vs `in_use_`), and the deletion rules.
- The crate assumes the existence of `Slice`, `Cache`, `CacheHandle`, `NUM_SHARDS`, `NUM_SHARD_BITS`, and `leveldb_hash`, provided elsewhere in the `bitcoin-rs` ecosystem.
Despite these constraints, when used correctly within its intended environment, bitcoinleveldb-lru offers a highly efficient, production-grade LRU cache engine suitable for heavy blockchain and database workloads.