Process-wide lockfree-ish cache of loaded `VelesIndex`es, shared between the MCP and gRPC servers (and anything else that wants to serve searches from in-memory indexes without re-walking `<repo>/.veles` per request).
Design notes:

- Storage: a `DashMap<String, CacheEntry>` with sharded internal locks, so
  concurrent operations on different repos never contend. The "lockfree" label
  is the practical kind: contention is bounded to a single shard, never the
  whole map.
- Per-index synchronization: each cache entry holds an `Arc<RwLock<VelesIndex>>`.
  Read-only operations (search, defs, refs, …) take a shared read lock;
  `update_from_path` takes the exclusive write lock. Two clients searching the
  same repo proceed in parallel; an update briefly blocks readers.
- Build deduplication: a `OnceCell` lives inside every entry, so several
  concurrent loaders of the same repo cooperate: one thread does the (slow)
  walk + embed + load, and the others await its result. No duplicate builds
  are wasted.
- LRU eviction: each entry stores an `AtomicU64` `last_access`. Every hit or
  miss bumps a global counter and writes it into the entry. When the cache
  exceeds capacity we scan all entries and evict the one with the smallest
  `last_access`. This is O(N), but N is small (≤ 16 in practice).
Tests assume eviction is "eventually correct" rather than strictly LRU under
contention: two threads concurrently bumping `last_access` on different
entries may finish in arbitrary order. For the actual workload (a 10-slot
cache, roughly tens of repos per session) this is fine.
Structs§

- `IndexCache`: Lockfree-ish process cache of loaded indexes.

Constants§

- `DEFAULT_CACHE_SIZE`: How many `VelesIndex` entries the cache keeps before evicting LRU.