Crate fastskip

§fastskip — Lock-free arena-backed skip list for LSM-tree memtables

fastskip provides a concurrent, lock-free skip list backed by per-thread arena allocators. It is designed to serve as the memtable in LSM-tree storage engines in the style of RocksDB, LevelDB, and CockroachDB.

§Features

  • Lock-free writes: Multiple threads insert and delete concurrently using a single CAS at level 0 and best-effort CAS at upper levels
  • Lock-free reads: Point lookups (get) walk the skip list without any locks or atomic read-modify-write
  • Snapshot isolation: Take a point-in-time snapshot that remains consistent even under concurrent inserts
  • Range scans: Cursor with lower-bound seek for prefix/range iteration
  • Safe lifecycle: seal() freezes the memtable for flushing and returns a fresh one for new writes — no unsafe code
  • Arena memory: Per-thread arena shards avoid allocation contention. Memory is bulk-reclaimed on drop (no per-node free overhead)

§Quick start

use fastskip::ConcurrentSkipList;

let sl = ConcurrentSkipList::new();

// Insert (multiple threads can call concurrently)
sl.insert(b"user:1001", b"alice");
sl.insert(b"user:1002", b"bob");

// Point lookup
let (val, tombstone) = sl.get(b"user:1001").unwrap();
assert_eq!(val, b"alice");
assert!(!tombstone);

// Delete (tombstone)
sl.delete(b"user:1002");
assert_eq!(sl.get_live(b"user:1002"), None);

// Iteration (sorted order)
for entry in sl.iter() {
    println!("{:?} -> {:?}", entry.key, entry.value);
}

§LSM memtable lifecycle

The intended lifecycle for an LSM-tree memtable:

use fastskip::ConcurrentSkipList;

// 1. Create active memtable
let memtable = ConcurrentSkipList::with_shards(4);

// 2. Concurrent writers insert/delete
memtable.insert(b"key1", b"val1");
memtable.delete(b"key2");

// 3. When full, seal it — returns frozen (for flushing) + fresh (for writes)
let (frozen, fresh) = memtable.seal().unwrap();

// 4. Flush frozen to SSTable (iterate snapshot)
for entry in frozen.iter() {
    if entry.is_tombstone {
        // write tombstone marker to SSTable
    } else {
        // write key-value to SSTable
    }
}

// 5. Drop frozen (reclaims arena memory)
drop(frozen);

// 6. Fresh memtable is ready for new writes
fresh.insert(b"key3", b"val3");

§Concurrent reads

Readers are fully lock-free. For consistent point-in-time reads under concurrent writes, use a snapshot:

use fastskip::ConcurrentSkipList;

let sl = ConcurrentSkipList::new();
sl.insert(b"a", b"1");
sl.insert(b"b", b"2");

// Snapshot captures a sequence number — iterators skip post-snapshot inserts
let snap = sl.snapshot();

// Insert more after snapshot (won't appear in snapshot iteration)
sl.insert(b"c", b"3");

// Snapshot sees only "a" and "b"
assert_eq!(snap.iter().count(), 2);
// Live iterator sees all three
assert_eq!(sl.iter().count(), 3);

§Range scans with Cursor

use fastskip::ConcurrentSkipList;

let sl = ConcurrentSkipList::new();
sl.insert(b"apple", b"1");
sl.insert(b"banana", b"2");
sl.insert(b"cherry", b"3");
sl.insert(b"date", b"4");

// Seek to first key >= "banana"
if let Some(cursor) = sl.cursor_at(b"banana") {
    let keys: Vec<_> = cursor
        .filter(|e| !e.is_tombstone)
        .map(|e| e.key.to_vec())
        .collect();
    assert_eq!(keys, vec![b"banana".to_vec(), b"cherry".to_vec(), b"date".to_vec()]);
}

§Design notes

§Insert after delete

Inserting a key that was previously tombstoned in the same memtable returns false (duplicate). This is by design: the skip list maintains exactly one node per key. To re-insert a deleted key, seal the memtable and write to the fresh one; the LSM-tree compaction process merges the two later.
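A minimal sketch of this behavior, assuming insert returns a bool (true on success, false on duplicate, as the note above implies) and reusing the seal() API from the lifecycle section:

```rust
use fastskip::ConcurrentSkipList;

let sl = ConcurrentSkipList::new();
assert!(sl.insert(b"user:1", b"v1"));  // fresh key: accepted
sl.delete(b"user:1");                  // tombstone; the node stays in place
assert!(!sl.insert(b"user:1", b"v2")); // same key, same memtable: rejected

// To accept the key again, seal and write to the fresh memtable.
let (_frozen, fresh) = sl.seal().unwrap();
assert!(fresh.insert(b"user:1", b"v2"));
```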

§Thread safety

The ConcurrentSkipList is Send + Sync and can be shared across threads via Arc. The arena uses per-thread shards (one writer thread per shard). Readers never allocate, so they are contention-free.
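As a sketch, sharing the list across writer threads via Arc, using only the with_shards, insert, and iter calls shown elsewhere on this page:

```rust
use std::sync::Arc;
use std::thread;

use fastskip::ConcurrentSkipList;

// One shard per expected writer thread keeps allocation contention-free.
let sl = Arc::new(ConcurrentSkipList::with_shards(4));

let handles: Vec<_> = (0..4u32)
    .map(|t| {
        let sl = Arc::clone(&sl);
        thread::spawn(move || {
            let key = format!("thread:{t}");
            sl.insert(key.as_bytes(), b"done");
        })
    })
    .collect();

for h in handles {
    h.join().unwrap();
}

// All four inserts are visible once the writers have joined.
assert_eq!(sl.iter().count(), 4);
```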

§Memory management

Arena memory is bulk-allocated in blocks and bulk-reclaimed when the ConcurrentSkipList is dropped. There is no per-node allocation or deallocation overhead. The seal() method creates a fresh arena for the new memtable, and dropping the FrozenMemtable reclaims the old one.
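A sketch of the reclamation pattern, using the with_shards and seal calls documented above (the 1000-key loop is illustrative only):

```rust
use fastskip::ConcurrentSkipList;

let memtable = ConcurrentSkipList::with_shards(2);
for i in 0..1000u32 {
    // Nodes are carved out of arena blocks; no per-key heap allocation.
    memtable.insert(&i.to_be_bytes(), b"payload");
}

// seal() hands the existing arena to the frozen side and starts a fresh one.
let (frozen, fresh) = memtable.seal().unwrap();
fresh.insert(b"new-key", b"new-val");

// Dropping the frozen memtable frees all of its arena blocks at once.
drop(frozen);
```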

Re-exports§

pub use fastarena;

Structs§

ConcurrentSkipList
A lock-free, arena-backed skip list for LSM-tree memtables.
Cursor
A seekable forward cursor for range scans over the skip list.
Entry
A single key-value entry yielded by iterators and cursors.
FrozenMemtable
A sealed (read-only) memtable ready for flushing to disk.
Iter
Live iterator over the current skip list state.
Snapshot
A point-in-time snapshot of the skip list.
SnapshotIter
Iterator over a point-in-time snapshot of the skip list.

Enums§

BatchError
Error for batch operations.
InsertError
Error returned by ConcurrentSkipList::try_insert.
SealError
Error returned by ConcurrentSkipList::seal.