# accelerator - Multi-Level Cache Runtime for Rust
Accelerator is a pluggable, async-first cache runtime for high-concurrency Rust services. It provides a unified API over a local cache (L1) and a remote cache (L2), with source-of-truth loading on miss, batch loading, invalidation broadcast, and built-in observability.
## 🚀 Features

- 🧭 Multi-level modes: `Local`, `Remote`, `Both`
- 🧱 Default backends: `moka` (L1) + `redis` (L2)
- 🔌 Pluggable backends via traits: `LocalBackend<V>`, `RemoteBackend<V>`, `InvalidationSubscriber`
- 🍱 Unified runtime APIs: `get`, `mget`, `set`, `mset`, `del`, `mdel`, `warmup`
- 📥 Loader contracts:
  - single-key: `Loader<K, V>`
  - batch: `MLoader<K, V>`
- 🛡️ Miss handling:
  - a single-key miss can use singleflight dedup (`penetration_protect`)
  - a batch miss (`mget`) uses `MLoader::mload` directly
- 🔄 Resilience and stability:
  - negative cache (`cache_null_value`, `null_ttl`)
  - TTL jitter (`ttl_jitter_ratio`)
  - refresh-ahead (`refresh_ahead`)
  - stale fallback (`stale_on_error`)
- 📡 Cross-instance local cache consistency: Redis Pub/Sub invalidation broadcast
- 👀 Observability:
  - runtime counters (`metrics_snapshot`)
  - diagnostic state (`diagnostic_snapshot`)
  - OTel-friendly metric points (`otel_metric_points`)
  - tracing spans on core paths
- 🪄 Procedural macros: `cacheable`, `cacheable_batch`, `cache_put`, `cache_evict`, `cache_evict_batch`
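The single-key miss path with `penetration_protect` deduplicates concurrent loads of the same key, so a cache-miss storm reaches the source of truth only once. A minimal, synchronous sketch of that singleflight idea using only the standard library (the type and method names here are illustrative, not the crate's API; the real path is async):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex, OnceLock};

/// Minimal singleflight: concurrent callers for the same key share one load.
/// Illustrative sketch only -- a synchronous analogue built on `OnceLock`.
struct SingleFlight<K, V> {
    calls: Mutex<HashMap<K, Arc<OnceLock<V>>>>,
}

impl<K: Eq + Hash + Clone, V: Clone> SingleFlight<K, V> {
    fn new() -> Self {
        Self { calls: Mutex::new(HashMap::new()) }
    }

    /// Run `load` for `key`, deduplicating concurrent calls: the first caller
    /// executes `load`; overlapping callers block and reuse its value.
    fn load_once(&self, key: K, load: impl FnOnce() -> V) -> V {
        let cell = self
            .calls
            .lock()
            .unwrap()
            .entry(key.clone())
            .or_insert_with(|| Arc::new(OnceLock::new()))
            .clone();
        let value = cell.get_or_init(load).clone();
        // Drop the entry so a later miss triggers a fresh load.
        self.calls.lock().unwrap().remove(&key);
        value
    }
}

fn main() {
    use std::sync::atomic::{AtomicUsize, Ordering};
    let sf = Arc::new(SingleFlight::new());
    let loads = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let (sf, loads) = (Arc::clone(&sf), Arc::clone(&loads));
            std::thread::spawn(move || {
                sf.load_once("user:42", || {
                    loads.fetch_add(1, Ordering::SeqCst);
                    std::thread::sleep(std::time::Duration::from_millis(200));
                    "alice"
                })
            })
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), "alice");
    }
    // The callers overlapped inside the 200 ms load window, so the loader
    // ran fewer than 8 times (typically once).
    assert!(loads.load(Ordering::SeqCst) < 8);
}
```

`OnceLock::get_or_init` already blocks concurrent initializers, which is exactly the dedup semantics wanted here; the entry is dropped afterwards so expiry leads to a fresh load.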
## 📋 Table of Contents
- 📦 Installation
- 🤠 Quick Start
- 🍱 API Overview
- 🧩 Macro Usage
- 🏗️ Backend Extension
- 🪄 Examples
- 🏎️ Benchmark and Regression Gate
- 🧪 Integration Tests
- 🧰 Local Full Stack
- 📚 Documentation
## 📦 Installation

Use from crates.io (recommended):

```toml
[dependencies]
accelerator = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
## 🤠 Quick Start

### Local-only cache (L1 = moka)

The original snippet was truncated to its imports; the sketch below reconstructs it from the builder chain shown in the two-level example (import paths, the backend builder name, and the TTL values are assumptions):

```rust
use std::time::Duration;
use accelerator::{CacheMode, LevelCacheBuilder}; // import paths assumed

// Inside an async fn returning a Result:
let local_backend = MokaLocalBackend::builder() // builder name illustrative
    .max_capacity(10_000)
    .build()?;
let cache = LevelCacheBuilder::new()
    .area("user")
    .mode(CacheMode::Local)
    .local(local_backend)
    .local_ttl(Duration::from_secs(60))
    .build()?;
```
### Two-level cache (L1 + L2)

The argument values and backend builder names below are illustrative; the original snippet elided them (import paths are assumed, and the code is meant to run inside an async fn that returns a `Result`):

```rust
use std::time::Duration;
use accelerator::{CacheMode, LevelCacheBuilder}; // import paths assumed

let local_backend = MokaLocalBackend::builder() // builder name illustrative
    .max_capacity(10_000)
    .build()?;
let remote_backend = RedisRemoteBackend::builder() // builder name illustrative
    .url("redis://127.0.0.1:6379")
    .key_prefix("app:")
    .build()?;
let cache = LevelCacheBuilder::new()
    .area("user")
    .mode(CacheMode::Both)
    .local(local_backend)
    .remote(remote_backend)
    .local_ttl(Duration::from_secs(60))
    .remote_ttl(Duration::from_secs(300))
    .broadcast_invalidation(true)
    .build()?;
```
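Fixed `local_ttl` / `remote_ttl` values make entries written together expire together. The `ttl_jitter_ratio` option listed under Features spreads expirations apart; a sketch of one plausible jitter computation (the function and exact formula are assumptions, not the crate's internals):

```rust
use std::time::Duration;

/// Spread a base TTL by up to +/- `ratio` (e.g. 0.1 => +/-10%), given a
/// uniform random sample `r` in [0, 1). Illustrative formula only.
fn jittered_ttl(base: Duration, ratio: f64, r: f64) -> Duration {
    // Map r into a multiplier in [1 - ratio, 1 + ratio).
    let factor = 1.0 + ratio * (2.0 * r - 1.0);
    base.mul_f64(factor)
}

fn main() {
    let base = Duration::from_secs(300);
    assert_eq!(jittered_ttl(base, 0.1, 0.5), base); // midpoint sample: no jitter
    assert!(jittered_ttl(base, 0.1, 0.9) > base);   // high sample: longer TTL
    assert!(jittered_ttl(base, 0.1, 0.1) < base);   // low sample: shorter TTL
}
```

With a 10% ratio, 300 s TTLs land anywhere in roughly 270-330 s, so a burst of writes does not produce a synchronized burst of reloads.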
## 🍱 API Overview

Core runtime:

- `LevelCache<K, V, LD, LB, RB>`
- `ReadOptions { allow_stale, disable_load }`
- `CacheMode::{Local, Remote, Both}`

Main methods:

- Read: `get`, `mget`
- Write: `set`, `mset`
- Invalidate: `del`, `mdel`
- Warmup: `warmup`

Diagnostics and metrics:

- `metrics_snapshot() -> CacheMetricsSnapshot`
- `diagnostic_snapshot() -> CacheDiagnosticSnapshot`
- `otel_metric_points() -> Vec<OtelMetricPoint>`

Loader traits:

- `Loader<K, V>::load(&K) -> Future<CacheResult<Option<V>>>`
- `MLoader<K, V>::mload(&[K]) -> Future<CacheResult<HashMap<K, Option<V>>>>`
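On the batch read path, `mget` serves hits from cache and hands only the missing keys to `MLoader::mload` in a single call, writing the loaded values back afterwards. A synchronous, in-memory analogue of that flow (the real traits are async and generic; the names here are illustrative):

```rust
use std::collections::HashMap;

/// Batch read: serve hits from `cache`, load misses in one batch call,
/// then write loaded values back. Synchronous analogue of `mget` + `mload`.
fn mget_with_loader<F>(
    cache: &mut HashMap<String, String>,
    keys: &[String],
    mload: F,
) -> HashMap<String, Option<String>>
where
    F: FnOnce(&[String]) -> HashMap<String, Option<String>>,
{
    let mut out = HashMap::new();
    let mut misses = Vec::new();
    for k in keys {
        match cache.get(k) {
            Some(v) => {
                out.insert(k.clone(), Some(v.clone()));
            }
            None => misses.push(k.clone()),
        }
    }
    if !misses.is_empty() {
        for (k, v) in mload(&misses) {
            if let Some(v) = &v {
                cache.insert(k.clone(), v.clone()); // write-back (`mset`)
            }
            out.insert(k, v);
        }
    }
    out
}

fn main() {
    let mut cache = HashMap::from([("a".to_string(), "1".to_string())]);
    let keys: Vec<String> = ["a", "b"].iter().map(|s| s.to_string()).collect();
    let result = mget_with_loader(&mut cache, &keys, |misses| {
        // The loader sees only the missed keys ("b" here).
        misses
            .iter()
            .map(|k| (k.clone(), Some(format!("loaded:{k}"))))
            .collect()
    });
    assert_eq!(result["a"].as_deref(), Some("1"));
    assert_eq!(result["b"].as_deref(), Some("loaded:b"));
    assert_eq!(cache["b"], "loaded:b"); // the miss was written back
}
```

Note that the loader closure receives only the missed keys, mirroring the `MLoader::mload` contract above.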
## 🧩 Macro Usage

Import the macros (the crate-root path is assumed; the original import line was elided):

```rust
use accelerator::{cacheable, cacheable_batch, cache_put, cache_evict, cache_evict_batch};
```
Macro behavior and constraints:

- `#[cacheable(...)]`: cache-first read; a miss executes the function body, then a `set` write-back.
- `#[cacheable_batch(...)]`: `mget` first, loads only the misses, then an `mset` write-back.
- `#[cache_put(...)]`: executes the function first, then `set`s the result to cache on success.
- `#[cache_evict(...)]` / `#[cache_evict_batch(...)]`: invalidate after success by default (`before = false`).
- Macros only support `async fn` methods in `impl` blocks (`&self` / `&mut self` receiver).
- `on_cache_error` supports `"ignore"` (default) or `"propagate"`.
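The constraints above describe a classic cache-aside expansion: check the cache, on a miss run the original body, then write the result back. A standalone sketch of that shape over a plain `HashMap` (this illustrates the flow only; it is not the macro's actual generated code, which is async and uses the runtime's `get`/`set`):

```rust
use std::collections::HashMap;

struct UserService {
    cache: HashMap<u64, String>,
    db_calls: u32,
}

impl UserService {
    /// What `#[cacheable]` conceptually expands a method into:
    /// cache-first read, miss executes the body, then a set write-back.
    fn get_user_cached(&mut self, id: u64) -> String {
        if let Some(hit) = self.cache.get(&id) {
            return hit.clone(); // cache hit: the body is skipped
        }
        let value = self.get_user_body(id); // miss: run the original body
        self.cache.insert(id, value.clone()); // write-back (`set`)
        value
    }

    fn get_user_body(&mut self, id: u64) -> String {
        self.db_calls += 1; // stands in for a real source-of-truth load
        format!("user-{id}")
    }
}

fn main() {
    let mut svc = UserService { cache: HashMap::new(), db_calls: 0 };
    assert_eq!(svc.get_user_cached(7), "user-7");
    assert_eq!(svc.get_user_cached(7), "user-7");
    assert_eq!(svc.db_calls, 1); // the second call was a cache hit
}
```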
Minimal single-key example (a reconstructed sketch; the attribute arguments, types, and method body are illustrative, since the original snippet was truncated):

```rust
use accelerator::cacheable; // path assumed

impl UserService {
    #[cacheable(area = "user")] // attribute arguments illustrative
    async fn get_user(&self, id: u64) -> CacheResult<Option<User>> {
        self.db.find_user(id).await // runs only on a cache miss
    }
}
```
Minimal batch example (likewise a reconstructed sketch with illustrative names):

```rust
use std::collections::HashMap;

use accelerator::cacheable_batch; // path assumed

impl UserService {
    #[cacheable_batch(area = "user")] // attribute arguments illustrative
    async fn get_users(&self, ids: &[u64]) -> CacheResult<HashMap<u64, Option<User>>> {
        self.db.find_users(ids).await // receives only the missed ids
    }
}
```

Runnable references:

- `examples/macro_best_practice.rs`
- `examples/macro_batch_best_practice.rs`
## 🏗️ Backend Extension

To replace the default backends:

- Implement `LocalBackend<V>` for your local cache.
- Implement `RemoteBackend<V>` and `InvalidationSubscriber` for your remote cache.
- Plug them in via `LevelCacheBuilder::local(...)` and `LevelCacheBuilder::remote(...)`.

The runtime uses static dispatch (generics), not runtime `dyn` objects.
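Static dispatch here means the cache is generic over its backend types, so backend calls are resolved at compile time and can be inlined; there is no `Box<dyn Backend>` vtable hop on the hot path. A self-contained illustration of the pattern (the trait and types are invented for this sketch and are not the crate's `LocalBackend<V>`):

```rust
/// Illustrative backend trait; the crate's real `LocalBackend<V>` differs.
trait Backend {
    fn get(&self, key: &str) -> Option<String>;
}

struct MapBackend(std::collections::HashMap<String, String>);

impl Backend for MapBackend {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

/// Static dispatch: `B` is fixed at compile time and monomorphized,
/// the same way `LevelCache<K, V, LD, LB, RB>` carries its backends.
struct Cache<B: Backend> {
    backend: B,
}

impl<B: Backend> Cache<B> {
    fn read(&self, key: &str) -> Option<String> {
        self.backend.get(key)
    }
}

fn main() {
    let mut map = std::collections::HashMap::new();
    map.insert("k".to_string(), "v".to_string());
    // Concrete backend type, no `Box<dyn Backend>` needed.
    let cache = Cache { backend: MapBackend(map) };
    assert_eq!(cache.read("k").as_deref(), Some("v"));
    assert_eq!(cache.read("missing"), None);
}
```

The trade-off is the usual one: each backend combination produces its own monomorphized code, in exchange for devirtualized calls on every cache operation.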
## 🪄 Examples

See `examples/`:

- `fixed_backend_best_practice.rs` (moka + redis)
- `macro_best_practice.rs` (macro-based single-key flow)
- `macro_batch_best_practice.rs` (macro-based batch flow)
- `clickstack_otlp.rs` (optional OTLP bootstrap, feature `otlp`)

Run an example with the standard cargo invocation (the original command block was elided):

```shell
cargo run --example fixed_backend_best_practice
```

If Redis is unavailable at `ACCELERATOR_REDIS_URL` (default `redis://127.0.0.1:6379`), the example exits gracefully.
## 🏎️ Benchmark and Regression Gate

One-click script:

Raw commands:

```shell
ACCELERATOR_BENCH_REDIS_URL=redis://127.0.0.1:0
```

Detailed playbook: `docs/performance-engineering-playbook.md`
## 🧪 Integration Tests

Redis integration tests live in `tests/redis_integration.rs`.

- They run with `cargo test`.
- If Redis is unavailable, tests skip gracefully where designed.
- Override the endpoint with `ACCELERATOR_TEST_REDIS_URL`.
## 🧰 Local Full Stack

Start the local stack:

Run end-to-end tests:

- `stack_integration` uses a real `sqlx` + Postgres loader flow.
- ClickStack UI: http://127.0.0.1:8080
- OTLP ingest ports: 4317 and 4318
## 📚 Documentation

English is the default documentation language. Chinese versions are maintained under `docs/zh/`.

| Topic | English | 中文(简体) |
|---|---|---|
| README | README.md | README.zh-CN.md |
| Terminology Baseline | docs/terminology.md | docs/zh/terminology.zh-CN.md |
| Capability Model | docs/multi-level-cache-capability-model.md | docs/zh/multi-level-cache-capability-model.zh-CN.md |
| Performance Playbook | docs/performance-engineering-playbook.md | docs/zh/performance-engineering-playbook.zh-CN.md |
| Cache Ops Runbook | docs/cache-ops-runbook.md | docs/zh/cache-ops-runbook.zh-CN.md |
| Local Stack Guide | docs/local-stack-integration.md | docs/zh/local-stack-integration.zh-CN.md |
| Code Flattening Guideline | docs/code-flattening-guideline.md | docs/zh/code-flattening-guideline.zh-CN.md |