# Memento
A flexible caching library with tiered storage and cost-based placement.
## Features
- **Tiered Storage**: Local (Memory), Shared (Redis), and Durable (Postgres) tiers
- **Cost-Based Placement**: Automatic tier selection based on computation cost
- **Zero-Config**: Works out of the box with just memory caching
- **Structured Keys**: Build cache keys from segments with wildcard invalidation support
- **Negative Caching**: Automatically caches `None` results to prevent repeated lookups
- **Negative TTL Override**: Different TTL for `None` results vs actual values
- **Stampede Protection**: Local coalescing of concurrent cache misses
- **Size Tracking**: Entry size observation for future placement decisions
- **TTL Support**: Configurable time-to-live per operation or globally
- **LRU Eviction**: Memory tier uses LRU eviction to stay within limits
## Quick Start
```rust
use memento_cache::{Cache, TieredPlanBuilder};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Zero-config: memory only
let cache = Cache::new(TieredPlanBuilder::new().build()?);
// Cache a value with a structured key
let user: Option<String> = cache
.key(["users", "123"])
.get(|| async { Some("Alice".to_string()) })
.await?;
// Invalidate with wildcards
cache.key(["users", "*"]).invalidate().await?;
Ok(())
}
```
## Tiered Caching
### Memory Only (Zero-Config)
Fast, in-process caching. Best for single-instance applications.
```rust
use memento_cache::{Cache, TieredPlanBuilder};
let cache = Cache::new(TieredPlanBuilder::new().build()?);
```
### Memory with Size Limits
Configure the local memory store with size-based limits using the `ByteSize` trait:
```rust
use memento_cache::{Cache, TieredPlanBuilder, MemoryStore, ByteSize};
use std::time::Duration;
let local = MemoryStore::builder()
.max_bytes(50.mb()) // Total cache size limit (default: 64 MB)
.max_entry_size(512.kb()) // Per-entry size limit
.default_ttl(Duration::from_secs(300))
.build();
let cache = Cache::new(
TieredPlanBuilder::new()
.local(local)
.build()?
);
```
The `ByteSize` trait provides convenient size conversions:
- `50.mb()` → 50 megabytes
- `512.kb()` → 512 kilobytes
- `1.gb()` → 1 gigabyte
- `1024.bytes()` → 1024 bytes
When limits are exceeded, LRU eviction removes the least recently used entries. Entries exceeding `max_entry_size` are silently dropped rather than stored.
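The conversions above can be sketched as an extension trait on integers. This is an illustrative model, not the library's actual definition (the trait name `ByteSizeExt` and the assumption of binary, 1024-based units are ours):

```rust
// Hypothetical sketch of a ByteSize-style extension trait,
// assuming binary (1024-based) units; the real trait may differ.
trait ByteSizeExt {
    fn bytes(self) -> u64;
    fn kb(self) -> u64;
    fn mb(self) -> u64;
    fn gb(self) -> u64;
}

impl ByteSizeExt for u64 {
    fn bytes(self) -> u64 { self }
    fn kb(self) -> u64 { self * 1024 }
    fn mb(self) -> u64 { self.kb() * 1024 }
    fn gb(self) -> u64 { self.mb() * 1024 }
}
```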
### With Redis (Shared Tier)
Distributed caching for multi-instance deployments.
```rust
use memento_cache::{Cache, TieredPlanBuilder, RedisStore};
let cache = Cache::new(
TieredPlanBuilder::new()
.shared(RedisStore::new("redis://localhost:6379")?)
.build()?
);
```
### With Postgres (Durable Tier)
Persistent caching that survives restarts. Schema must be created via migrations.
> **Note:** `PostgresBuilder::build()` is async (it may validate schema). `TieredPlanBuilder::build()` is synchronous.
```rust
use memento_cache::{Cache, TieredPlanBuilder, PostgresBuilder};
use sqlx::PgPool;
// Table must exist (use a migration to create it);
// `pool` is an existing sqlx::PgPool
let pg_store = PostgresBuilder::new(pool)
.table("memento_cache")
.build()
.await?;
let cache = Cache::new(
TieredPlanBuilder::new()
.durable(pg_store)
.build()?
);
```
### Full Tiered Setup
Memory + Redis + Postgres with automatic tier selection.
```rust
use memento_cache::{Cache, TieredPlanBuilder, RedisStore, PostgresBuilder};
use sqlx::PgPool;
// `pool` is an existing sqlx::PgPool
let pg_store = PostgresBuilder::new(pool)
.table("memento_cache")
.build()
.await?;
let cache = Cache::new(
TieredPlanBuilder::new()
.shared(RedisStore::new("redis://localhost:6379")?)
.durable(pg_store)
.build()?
);
```
### Tier Promotion TTLs
When entries are read from slower tiers, they're promoted to faster tiers. Configure the default TTLs for promoted entries:
```rust
use memento_cache::{Cache, TieredPlanBuilder, TierTTLs};
use std::time::Duration;
// Configure individual TTLs
let cache = Cache::new(
TieredPlanBuilder::new()
.local_ttl(Duration::from_secs(30)) // Local promotion TTL (default: 60s)
.shared_ttl(Duration::from_secs(600)) // Shared promotion TTL (default: 300s)
.build()?
);
// Or use TierTTLs directly
let cache = Cache::new(
TieredPlanBuilder::new()
.tier_ttls(TierTTLs::new(
Duration::from_secs(30), // Local
Duration::from_secs(600), // Shared
))
.build()?
);
```
**Promotion behavior:**
- **Shared → Local**: Entry promoted with `local_ttl` (default: 60s)
- **Durable → Shared**: Entry promoted with `shared_ttl` (default: 300s)
- **Durable → Local**: Entry promoted with `min(remaining_ttl, local_ttl)`
If the original entry has a remaining TTL, promotion uses `min(remaining_ttl, tier_ttl)` so a promoted copy never outlives the original entry.
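The promotion rule above can be sketched as a small helper (the function name `promotion_ttl` is illustrative, not part of the library's API):

```rust
use std::time::Duration;

// Sketch of the promotion-TTL rule: a promoted copy never outlives the
// entry's remaining TTL, and is always capped by the target tier's TTL.
fn promotion_ttl(remaining: Option<Duration>, tier_ttl: Duration) -> Duration {
    match remaining {
        Some(r) => r.min(tier_ttl),
        None => tier_ttl, // entry never expires: use the tier default
    }
}
```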
## Cost-Based Placement
Use cost hints to control which tier stores your data:
```rust
use memento_cache::{Cache, TieredPlanBuilder, CacheCost, CacheExpires};
use std::time::Duration;
let cache = Cache::new(TieredPlanBuilder::new().build()?);
// Cheap: always local (fast to recompute)
let result: String = cache.key(["transform", &id])
.cost_hint(CacheCost::Cheap)
.get(|| async { simple_transform().await })
.await?;
// Moderate: shared if available (default)
let result: User = cache.key(["users", &user_id])
.get(|| async { fetch_user().await })
.await?;
// Expensive: durable if available
let result: Inference = cache.key(["ml", "inference", &model_id])
.cost_hint(CacheCost::Expensive)
.expires(CacheExpires::After(Duration::from_secs(3600)))
.get(|| async { run_ml_inference().await })
.await?;
```
### Default Tier Resolution
| Cost Hint | Local + Shared + Durable | Local + Shared only | Local only |
|---|---|---|---|
| Cheap | Local | Local | Local |
| Moderate | Shared | Shared | Local |
| Expensive | Durable | Shared | Local |
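The resolution table can be modeled as a single function. This is a sketch of the behavior described above, not the library's source (the `Tier` and `CacheCost` names mirror its public types):

```rust
// Illustrative model of the default cost-to-tier resolution.
#[derive(Debug, PartialEq)]
enum Tier { Local, Shared, Durable }

enum CacheCost { Cheap, Moderate, Expensive }

fn resolve_tier(cost: CacheCost, has_shared: bool, has_durable: bool) -> Tier {
    match cost {
        CacheCost::Cheap => Tier::Local,
        CacheCost::Moderate if has_shared => Tier::Shared,
        CacheCost::Moderate => Tier::Local,
        CacheCost::Expensive if has_durable => Tier::Durable,
        CacheCost::Expensive if has_shared => Tier::Shared,
        CacheCost::Expensive => Tier::Local,
    }
}
```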
## Custom Tier Plans
Override the default cost-to-tier mapping:
```rust
use memento_cache::{Cache, TieredPlanBuilder, RedisStore, CacheCost, Tier};
let cache = Cache::new(
TieredPlanBuilder::new()
.shared(RedisStore::new("redis://localhost:6379")?)
.plan(CacheCost::Moderate, Tier::Local) // Keep moderate in memory
.plan(CacheCost::Expensive, Tier::Shared) // Use Redis for expensive
.build()?
);
```
## Key Building
Keys are built from segments joined by `:`. Use `"*"` for wildcard invalidation.
```rust
// Simple key: "users:123"
cache.key(["users", "123"]).get(...).await?;
// Compound key: "users:123:posts:456"
cache.key(["users", "123", "posts", "456"]).get(...).await?;
// Wildcard invalidation: "users:123:*"
cache.key(["users", "123", "*"]).invalidate().await?;
```
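Key assembly itself is just segment joining, which can be sketched as (the helper name `build_key` is illustrative):

```rust
// Sketch of key assembly: segments joined by ':'.
// The wildcard "*" is an ordinary literal segment.
fn build_key(segments: &[&str]) -> String {
    segments.join(":")
}
```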
### Namespaces
Use `cache.namespace()` to create isolated cache regions. All keys are automatically prefixed with the namespace.
```rust
// Create a namespaced cache
let audio = cache.namespace("audio");
// Key becomes "audio:convert:123"
audio.key(["convert", "123"]).get(...).await?;
// Invalidate all audio cache entries
audio.key(["*"]).invalidate().await?;
// Nested namespaces
let transcoding = audio.namespace("transcoding");
// Key becomes "audio:transcoding:job:456"
transcoding.key(["job", "456"]).get(...).await?;
```
Namespaces are useful for:
- **Domain isolation**: Separate cache entries by feature (`audio`, `video`, `users`)
- **Bulk invalidation**: Clear all entries for a domain with a single wildcard
- **Code organization**: Pass namespaced caches to modules without exposing the full cache
### Versioning
Use `.version()` to add a version to a namespace. This is useful for cache invalidation when data formats change.
```rust
// Version 1 of the audio cache
let audio_v1 = cache.namespace("audio").version("v1");
// Key becomes "audio@v1:convert:123"
audio_v1.key(["convert", "123"]).get(...).await?;
// Later, when format changes, bump the version
let audio_v2 = cache.namespace("audio").version("v2");
// Key becomes "audio@v2:convert:123" (different from v1)
audio_v2.key(["convert", "123"]).get(...).await?;
// Old v1 entries are now orphaned (won't be read)
// Optionally clean them up:
audio_v1.key(["*"]).invalidate().await?;
```
Versioning is useful for:
- **Schema migrations**: Bump version when serialization format changes
- **Safe rollouts**: New version reads fresh data, old version still works
- **Lazy invalidation**: Old entries expire naturally or can be cleaned up later
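The `ns@version:segments` shape shown in the examples above can be sketched as (the helper name `namespaced_key` is illustrative, not the library's API):

```rust
// Sketch of versioned-namespace key prefixing, matching the
// "audio@v1:convert:123" examples above.
fn namespaced_key(namespace: &str, version: Option<&str>, segments: &[&str]) -> String {
    let prefix = match version {
        Some(v) => format!("{namespace}@{v}"),
        None => namespace.to_string(),
    };
    format!("{}:{}", prefix, segments.join(":"))
}
```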
### Safe Keys with `KeyPart::hash()`
For unbounded values like URLs, file paths, or user input, use `KeyPart::hash()` to create bounded, storage-safe key segments:
```rust
use memento_cache::{Cache, TieredPlanBuilder, KeyPart, CacheCost, CacheExpires};
let cache = Cache::new(TieredPlanBuilder::new().build()?);
// Hash unbounded input (URL) to create a safe key
// String literals convert to KeyPart automatically via From<&str>
let url = "https://example.com/very/long/path?with=params&and=more";
let mp3 = cache
.key(["convertPCMtoMP3", KeyPart::hash(url)])
.cost_hint(CacheCost::Expensive)
.expires(CacheExpires::Never)
.get(|| async { convert_pcm_to_mp3(url).await })
.await?;
// With prefix for debugging (shows "url_<hash>" in key)
let data = cache
.key(["audio", KeyPart::hash_with_prefix("url", url)])
.get(|| async { fetch_audio(url).await })
.await?;
```
#### Why Use `KeyPart::hash()`?
- **Bounded length**: URLs and file paths can be arbitrarily long, but hashes are always 64 characters
- **Storage-safe**: No special characters that might break Redis or Postgres
- **Deterministic**: Same input always produces the same hash
- **Explicit**: No magic auto-hashing; you control what gets hashed
#### Design Rules
- **Never** auto-hash literals: `"users"` stays as `"users"`
- **Never** hash entire keys: only individual segments are hashed
- **Always** explicit: use `KeyPart::hash()` when you need hashing
- Wildcards are **always** literals: `"*"` is never hashed
## Expiration
```rust
use memento_cache::CacheExpires;
use std::time::Duration;
// Default TTL (5 minutes)
cache.key(["data"]).get(...).await?;
// Custom TTL
cache.key(["hot", "data"])
.expires(CacheExpires::After(Duration::from_secs(30)))
.get(...).await?;
// Never expire (automatically persisted to Postgres if available)
cache.key(["permanent", "config"])
.expires(CacheExpires::Never)
.get(...).await?;
```
## Compression
Enable automatic gzip compression for large entries:
```rust
let data = cache
.key(["large", "data"])
.compressed() // Enable compression
.get(|| async { fetch_large_data().await })
.await?;
```
**Compression behavior:**
- Only applies to entries larger than 1KB (1024 bytes)
- Uses gzip compression
- Only stores compressed if it reduces size
- Transparent decompression on read
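The store-compressed decision above reduces to two checks, sketched here as a standalone predicate (the function name is illustrative):

```rust
// Sketch of the compression decision: only compress entries over 1 KiB,
// and keep the compressed form only if it is actually smaller.
fn store_compressed(original_len: usize, compressed_len: usize) -> bool {
    original_len > 1024 && compressed_len < original_len
}
```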
### Automatic Persistence for `CacheExpires::Never`
When you use `CacheExpires::Never`, entries are **automatically routed to the Durable (Postgres) tier** if available, regardless of the cost hint. This ensures permanent entries survive application restarts.
| Expiration | Durable tier available | Durable tier absent |
|---|---|---|
| `Never` | **Durable** (automatic) | Falls back to cost-based tier |
| `After(...)` | Cost-based tier | Cost-based tier |
| `Default` | Cost-based tier | Cost-based tier |
This behavior makes semantic sense: if something should "never expire," it should also survive restarts.
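The override can be sketched as a one-line rule layered on top of cost-based resolution (tier names are plain strings here for illustration):

```rust
// Sketch of the expiration override: `Never` routes to the durable tier
// when one exists, otherwise it falls back to the cost-based tier.
fn base_tier(never_expires: bool, has_durable: bool, cost_tier: &'static str) -> &'static str {
    if never_expires && has_durable { "durable" } else { cost_tier }
}
```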
## Negative Caching with TTL Override
The `get()` method automatically handles `Option<T>` types with proper negative caching. You can set a different TTL for `None` results using `negative_ttl()`:
```rust
use memento_cache::{Cache, TieredPlanBuilder, CacheExpires};
use std::time::Duration;
let cache = Cache::new(TieredPlanBuilder::new().build()?);
// Cache user lookups with shorter TTL for "not found" results
let user: Option<User> = cache
.key(["user", &user_id])
.expires(CacheExpires::After(Duration::from_secs(300))) // 5 min for found users
.negative_ttl(Duration::from_secs(10)) // 10 sec for "not found"
.get(|| async { fetch_user(&user_id).await })
.await?;
```
This is useful for:
- **Preventing hot-miss storms**: Cache "not found" results briefly
- **Protecting upstream services**: Avoid hammering databases for non-existent records
- **Different invalidation needs**: "Not found" may change faster than existing data
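The TTL selection described above can be sketched as follows; note the fallback when `negative_ttl` is unset, which is what makes the `CacheExpires::Never` caveat below matter (the helper name `entry_ttl` is illustrative):

```rust
use std::time::Duration;

// Sketch of negative-TTL selection: None results use negative_ttl when set,
// otherwise they inherit the regular TTL.
fn entry_ttl(is_none: bool, ttl: Duration, negative_ttl: Option<Duration>) -> Duration {
    if is_none {
        negative_ttl.unwrap_or(ttl)
    } else {
        ttl
    }
}
```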
> **Note:** When using `CacheExpires::Never` without `negative_ttl()`, negative cache entries (None results) will also never expire. If you want "not found" results to expire sooner than permanent data, always set `negative_ttl()` explicitly when using `CacheExpires::Never`.
## Stampede Protection
Memento includes local stampede protection that coalesces concurrent cache misses within the same process. When multiple requests hit the same cache miss simultaneously, only one fetch is performed and the result is shared.
```rust
// These concurrent requests will only trigger ONE fetch
let handles: Vec<_> = (0..100)
.map(|_| {
let cache = cache.clone();
tokio::spawn(async move {
cache
.key(["expensive", "computation"])
.get(|| async {
// Only called once, even with 100 concurrent requests
expensive_computation().await
})
.await
})
})
.collect();
```
**Important notes:**
- Stampede protection is **local only** (per-process)
- No distributed locking or coordination
- Best-effort: some edge cases may still trigger duplicate fetches
- Failures wake all waiters with the error
## Architecture
### Tiers
1. **Local (Memory)**: Always present, fastest, in-process only
2. **Shared (Redis)**: Optional, distributed across instances
3. **Durable (Postgres)**: Optional, persistent storage
### Read Order
Reads always check tiers in order: Local → Shared → Durable
When data is found in a lower tier (e.g., Durable), it's automatically promoted to higher tiers (Local, Shared) with appropriate TTLs.
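The fastest-first lookup can be sketched as a fold over the tiers (an illustrative model, not the library's internals):

```rust
// Sketch of the read path: check Local, then Shared, then Durable;
// the first tier holding a value wins.
fn first_hit<T>(local: Option<T>, shared: Option<T>, durable: Option<T>) -> Option<T> {
    local.or(shared).or(durable)
}
```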
### Write Behavior
Writes go to the base tier determined by cost and expiration:
- **`CacheExpires::Never`** → Durable (if available), regardless of cost
- **Cheap** → Local only
- **Moderate** → Shared (if available), else Local
- **Expensive** → Durable (if available), else Shared, else Local
### Promotion Rules
Reads from slower tiers may be promoted to faster tiers with bounded TTLs:
- Direction: Durable → Shared → Local (toward faster tiers)
- Only if the target tier exists
- Only for TTL-bounded entries
- Never authoritative (source of truth remains in the base tier)
## Invariants
The tiered cache system guarantees:
1. **Memory always exists**: Local tier is always available
2. **Invalid plans error**: Referencing non-existent tiers fails at build time
3. **Graceful degradation**: Missing tiers fall back to available ones
4. **Durable never used if absent**: No accidental Durable references
5. **Local never authoritative**: Source of truth is always the highest configured tier
## Postgres Schema
The Postgres store requires a table to be created via migrations. The store **never** creates or modifies schema at runtime.
### Migration
Create a new migration file (e.g., `migrations/YYYYMMDDHHMMSS_create_memento_cache.sql`):
```sql
-- Memento cache table for durable caching
-- This table stores cache entries that need to persist across restarts
CREATE TABLE IF NOT EXISTS public.memento_cache (
key TEXT PRIMARY KEY,
value BYTEA,
is_none BOOLEAN NOT NULL,
expires_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
CHECK (
(is_none = TRUE AND value IS NULL)
OR (is_none = FALSE AND value IS NOT NULL)
)
);
-- Index for efficient cleanup of expired entries
-- Partial index excludes permanent entries (expires_at IS NULL)
CREATE INDEX IF NOT EXISTS idx_memento_cache_expires
ON public.memento_cache (expires_at)
WHERE expires_at IS NOT NULL;
-- Documentation
COMMENT ON TABLE public.memento_cache IS
'Memento durable cache storage for expensive computations';
COMMENT ON COLUMN public.memento_cache.key IS
'Cache key (colon-separated segments)';
COMMENT ON COLUMN public.memento_cache.value IS
'Serialized cache value (bincode), NULL if negative cache entry';
COMMENT ON COLUMN public.memento_cache.is_none IS
'True if this is a negative cache entry (None result)';
COMMENT ON COLUMN public.memento_cache.expires_at IS
'When this entry expires (NULL = never)';
COMMENT ON COLUMN public.memento_cache.created_at IS
'When this entry was created';
```
Run the migration using your preferred tool (sqlx, diesel, etc.):
```bash
# Using sqlx
sqlx migrate run
# Using psql directly
psql $DATABASE_URL -f migrations/YYYYMMDDHHMMSS_create_memento_cache.sql
```
### Custom Table Name
If you need a different table name or schema:
```sql
-- Custom schema and table name
CREATE TABLE myapp.cache_entries (
-- same columns as above
);
CREATE INDEX idx_cache_entries_expires
ON myapp.cache_entries (expires_at)
WHERE expires_at IS NOT NULL;
```
Then configure the builder:
```rust
let pg_store = PostgresBuilder::new(pool)
.schema("myapp")
.table("cache_entries")
.build()
.await?;
```
### Schema Design
| Column | Type | Description |
|---|---|---|
| `key` | `TEXT PRIMARY KEY` | Cache key (colon-separated segments) |
| `value` | `BYTEA` | Bincode-serialized value, NULL for negative cache |
| `is_none` | `BOOLEAN NOT NULL` | True if this is a negative cache entry |
| `expires_at` | `TIMESTAMPTZ` | Expiration time, NULL = never expires |
| `created_at` | `TIMESTAMPTZ NOT NULL` | When entry was created |
**Constraints:**
- `CHECK` constraint enforces: `is_none = TRUE ⟺ value IS NULL`
- Partial index on `expires_at` excludes permanent entries for efficient cleanup
### Maintenance
Expired entries are filtered on read, but you may want periodic cleanup:
```sql
-- Manual cleanup (run periodically via cron or pg_cron)
DELETE FROM public.memento_cache
WHERE expires_at IS NOT NULL AND expires_at < NOW();
```
Or use the built-in cleanup method:
```rust
// Returns number of entries removed
let removed = pg_store.cleanup_expired().await?;
```
## Legacy API
> ⚠️ **Deprecated**: The legacy strategy-based API is maintained for backwards compatibility only. New applications should use the tiered API shown above.
```rust
use memento_cache::{Cache, MemoryCacheStrategy, RedisCacheStrategy, TieredCacheStrategy};
// Memory only (old API)
let cache = Cache::default();
// Redis (old API)
let cache = Cache::new(RedisCacheStrategy::new("redis://localhost:6379")?);
// Tiered (old API)
let cache = Cache::new(TieredCacheStrategy::new("redis://localhost:6379")?);
```
## License
MIT