armdb 0.1.13

sharded bitcask key-value storage optimized for NVMe
# Known Caveats

## CAS and update block shard on I/O

`VarTree::cas`, `VarTree::update`, `VarMap::cas`, and `VarMap::update` hold the
shard mutex while reading the current value (`read_value_locked`). On a
block-cache miss this means a `pread` syscall happens under the lock, so all
writes (and other CAS/update operations) targeting the same shard are blocked
until the read completes.

`ConstTree` and `ConstMap` variants of `cas` and `update` are **not** affected
because values are stored inline in the index (no disk I/O needed).

**Impact:** Under heavy CAS/update load with a poor cache hit rate, write latency
for the affected shard can spike from in-memory speeds to full disk-read latency.

**Mitigation:** Pre-warm the block cache (`warmup()`) before entering
CAS/update-heavy workloads, or increase cache capacity so working-set blocks
stay resident.

## migrate() memory for HashMap trees

`ConstMap::migrate` and `VarMap::migrate` collect all keys of each shard into a
`Vec` before iterating, because the shard index lock cannot be held across
`put`/`delete` calls (which re-acquire it).

For shards with hundreds of millions of keys this causes a transient memory
spike of `O(keys_per_shard * K)` bytes, where `K` is the average key size plus
per-entry `Vec` overhead.

`ConstTree::migrate` and `VarTree::migrate` walk the lock-free SkipList
directly and are **not** affected.

**Mitigation:** If memory is tight, consider migrating in batches per shard
prefix range or switching to a SkipList-based tree type for the migration step.