# Memento
A flexible caching library with tiered storage and cost-based placement.
## Features
- Tiered Storage: Local (Memory), Shared (Redis), and Durable (Postgres) tiers
- Cost-Based Placement: Automatic tier selection based on computation cost
- Zero-Config: Works out of the box with just memory caching
- Structured Keys: Build cache keys from segments with wildcard invalidation support
- Negative Caching: Automatically caches `None` results to prevent repeated lookups
- Negative TTL Override: Different TTL for `None` results vs actual values
- Stampede Protection: Local coalescing of concurrent cache misses
- Size Tracking: Entry size observation for future placement decisions
- TTL Support: Configurable time-to-live per operation or globally
- LRU Eviction: Memory tier uses LRU eviction to stay within limits
## Quick Start

A minimal example. The original snippet was lost in extraction; the type and method names below are reconstructed from the sections that follow and may differ from the actual API:

```rust
use memento::Memento;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Zero-config: in-process memory caching only.
    let cache = Memento::new();

    // Fetch-or-compute: the closure runs only on a cache miss.
    let greeting: String = cache
        .key(("greetings", "en"))
        .get(|| async { Ok("hello".to_string()) })
        .await?;
    println!("{greeting}");
    Ok(())
}
```
## Tiered Caching

### Memory Only (Zero-Config)

Fast, in-process caching. Best for single-instance applications. (Constructor name reconstructed.)

```rust
use memento::Memento;

let cache = Memento::new();
```
### Memory with Size Limits

Configure the local memory store with size-based limits using the `ByteSize` trait (builder and constructor names below are reconstructed and may differ):

```rust
use memento::{ByteSize, LocalStore, Memento};
use std::time::Duration;

let local = LocalStore::builder()
    .max_bytes(64.mb())       // Total cache size limit (default: 64 MB)
    .max_entry_size(1.mb())   // Per-entry size limit
    .default_ttl(Duration::from_secs(300))
    .build();

let cache = Memento::with_local(local);
```
The `ByteSize` trait provides convenient size conversions:

- `50.mb()` → 50 megabytes
- `512.kb()` → 512 kilobytes
- `1.gb()` → 1 gigabyte
- `1024.bytes()` → 1024 bytes
When limits are exceeded, LRU eviction removes the oldest entries. Entries exceeding max_entry_size are silently dropped.
### With Redis (Shared Tier)

Distributed caching for multi-instance deployments. (Constructor name and connection string are reconstructed.)

```rust
use memento::Memento;

let cache = Memento::with_redis("redis://127.0.0.1/")?;
```
### With Postgres (Durable Tier)

Persistent caching that survives restarts. Schema must be created via migrations.

Note: `PostgresBuilder::build()` is async (it may validate schema). `TieredPlanBuilder::build()` is synchronous.

```rust
use memento::{Memento, PostgresBuilder};
use sqlx::PgPool;

// Table must exist (use a migration to create it).
// Names other than `.table()` and `.build()` are reconstructed.
let pool = PgPool::connect(&database_url).await?;
let pg_store = PostgresBuilder::new(pool)
    .table("memento_cache")
    .build()
    .await?;

let cache = Memento::with_durable(pg_store);
```
### Full Tiered Setup

Memory + Redis + Postgres with automatic tier selection. (Constructor names reconstructed.)

```rust
use memento::{Memento, PostgresBuilder};
use sqlx::PgPool;

let pool = PgPool::connect(&database_url).await?;
let pg_store = PostgresBuilder::new(pool)
    .table("memento_cache")
    .build()
    .await?;

let cache = Memento::tiered("redis://127.0.0.1/", pg_store)?;
```
### Tier Promotion TTLs

When entries are read from slower tiers, they're promoted to faster tiers. Configure the default TTLs for promoted entries (builder method names are reconstructed; the `TierTTLs` field names follow the defaults listed below):

```rust
use memento::{Memento, TierTTLs};
use std::time::Duration;

// Configure individual TTLs
let cache = Memento::builder()
    .local_ttl(Duration::from_secs(60))
    .shared_ttl(Duration::from_secs(300))
    .build();

// Or use TierTTLs directly
let cache = Memento::builder()
    .tier_ttls(TierTTLs {
        local_ttl: Duration::from_secs(60),
        shared_ttl: Duration::from_secs(300),
    })
    .build();
```
Promotion behavior:

- Shared → Local: Entry promoted with `local_ttl` (default: 60s)
- Durable → Shared: Entry promoted with `shared_ttl` (default: 300s)
- Durable → Local: Entry promoted with `min(remaining_ttl, local_ttl)`
If the original entry has a remaining TTL, the promotion uses min(remaining_ttl, tier_ttl) to prevent stale data.
## Cost-Based Placement

Use cost hints to control which tier stores your data. The fetch closures, key segments, and result types below are illustrative:

```rust
use memento::{CacheExpires, CostHint};
use std::time::Duration;

// Assumes `cache` is a fully tiered cache from the setup above.

// Cheap: always local (fast to recompute)
let result: String = cache
    .key(("render", "home"))
    .cost_hint(CostHint::Cheap)
    .get(|| async { Ok(render_home()) })
    .await?;

// Moderate: shared if available (default)
let result: User = cache
    .key(("users", "123"))
    .get(|| async { load_user(123).await })
    .await?;

// Expensive: durable if available
let result: Inference = cache
    .key(("inference", "abc"))
    .cost_hint(CostHint::Expensive)
    .expires(CacheExpires::After(Duration::from_secs(24 * 3600)))
    .get(|| async { run_inference("abc").await })
    .await?;
```
### Default Tier Resolution
| Cost | With Durable | With Shared | Memory Only |
|---|---|---|---|
| Cheap | Local | Local | Local |
| Moderate | Shared | Shared | Local |
| Expensive | Durable | Shared | Local |
### Custom Tier Plans

Override the default cost-to-tier mapping. (`TieredPlanBuilder` is named in the note above; its per-cost methods here are reconstructed guesses.)

```rust
use memento::{Memento, Tier, TieredPlanBuilder};

// `build()` is synchronous and errors if the plan references a tier
// that isn't configured (see Invariants below).
let plan = TieredPlanBuilder::new()
    .cheap(Tier::Local)
    .moderate(Tier::Local)
    .expensive(Tier::Shared)
    .build()?;

let cache = Memento::tiered_with_plan("redis://127.0.0.1/", pg_store, plan)?;
```
## Key Building

Keys are built from segments joined by `:`. Use `"*"` for wildcard invalidation. (Argument shapes and fetch closures below are illustrative.)

```rust
// Simple key: "users:123"
cache.key(("users", "123")).get(fetch_user).await?;

// Compound key: "users:123:posts:456"
cache.key(("users", "123", "posts", "456")).get(fetch_post).await?;

// Wildcard invalidation: "users:123:*"
cache.key(("users", "123", "*")).invalidate().await?;
```
### Namespaces

Use `cache.namespace()` to create isolated cache regions. All keys are automatically prefixed with the namespace. (Fetch closures are illustrative.)

```rust
// Create a namespaced cache
let audio = cache.namespace("audio");

// Key becomes "audio:convert:123"
audio.key(("convert", "123")).get(convert_audio).await?;

// Invalidate all audio cache entries
audio.key(("*",)).invalidate().await?;

// Nested namespaces
let transcoding = audio.namespace("transcoding");

// Key becomes "audio:transcoding:job:456"
transcoding.key(("job", "456")).get(fetch_job).await?;
```
Namespaces are useful for:

- Domain isolation: Separate cache entries by feature (`audio`, `video`, `users`)
- Bulk invalidation: Clear all entries for a domain with a single wildcard
- Code organization: Pass namespaced caches to modules without exposing the full cache
### Versioning

Use `.version()` to add a version to a namespace. This is useful for cache invalidation when data formats change. (Fetch closures are illustrative.)

```rust
// Version 1 of the audio cache
let audio_v1 = cache.namespace("audio").version(1);

// Key becomes "audio@v1:convert:123"
audio_v1.key(("convert", "123")).get(convert_audio).await?;

// Later, when the format changes, bump the version
let audio_v2 = cache.namespace("audio").version(2);

// Key becomes "audio@v2:convert:123" (different from v1)
audio_v2.key(("convert", "123")).get(convert_audio).await?;

// Old v1 entries are now orphaned (won't be read).
// Optionally clean them up:
audio_v1.key(("*",)).invalidate().await?;
```
Versioning is useful for:
- Schema migrations: Bump version when serialization format changes
- Safe rollouts: New version reads fresh data, old version still works
- Lazy invalidation: Old entries expire naturally or can be cleaned up later
### Safe Keys with KeyPart::hash()

For unbounded values like URLs, file paths, or user input, use `KeyPart::hash()` to create bounded, storage-safe key segments. (The prefixed variant below uses a hypothetical helper name; the original only shows that a `url_<hash>` form exists.)

```rust
use memento::{CacheExpires, CostHint, KeyPart, Memento};
use std::time::Duration;

let cache = Memento::new();

// Hash unbounded input (URL) to create a safe key.
// String literals convert to KeyPart automatically via From<&str>.
let url = "https://example.com/very/long/path?with=params&and=more";
let mp3: Vec<u8> = cache
    .key(("audio", "convert", KeyPart::hash(url)))
    .cost_hint(CostHint::Expensive)
    .expires(CacheExpires::After(Duration::from_secs(3600)))
    .get(|| async { convert_to_mp3(url).await })
    .await?;

// With a prefix for debugging (shows "url_<hash>" in the key);
// `hash_prefixed` is a hypothetical helper name.
let data: Vec<u8> = cache
    .key((KeyPart::hash_prefixed("url", url),))
    .get(|| async { fetch(url).await })
    .await?;
```
#### Why Use KeyPart::hash()?
- Bounded length: URLs and file paths can be arbitrarily long, but hashes are always 64 characters
- Storage-safe: No special characters that might break Redis or Postgres
- Deterministic: Same input always produces the same hash
- Explicit: No magic auto-hashing; you control what gets hashed
#### Design Rules

- Never auto-hash literals: `"users"` stays as `"users"`
- Never hash entire keys: only individual segments
- Always explicit: use `KeyPart::hash()` when you need hashing
- Wildcards are always literals: `"*"` is never hashed
## Expiration

Fetch closures and key segments below are illustrative:

```rust
use memento::CacheExpires;
use std::time::Duration;

// Default TTL (5 minutes)
cache.key(("users", "123")).get(fetch_user).await?;

// Custom TTL
cache.key(("users", "123"))
    .expires(CacheExpires::After(Duration::from_secs(30)))
    .get(fetch_user).await?;

// Never expire (automatically persisted to Postgres if available)
cache.key(("config", "site"))
    .expires(CacheExpires::Never)
    .get(fetch_config).await?;
```
## Compression

Enable automatic gzip compression for large entries (fetch closure is illustrative):

```rust
let data: Report = cache
    .key(("reports", "annual"))
    .compressed() // Enable compression
    .get(fetch_report)
    .await?;
```
Compression behavior:
- Only applies to entries larger than 1KB (1024 bytes)
- Uses gzip compression
- Only stores compressed if it reduces size
- Transparent decompression on read
## Automatic Persistence for CacheExpires::Never

When you use `CacheExpires::Never`, entries are automatically routed to the Durable (Postgres) tier if available, regardless of the cost hint. This ensures permanent entries survive application restarts.

| Expiration | With Durable | Without Durable |
|---|---|---|
| `Never` | Durable (automatic) | Falls back to cost-based tier |
| `After(...)` | Cost-based tier | Cost-based tier |
| Default | Cost-based tier | Cost-based tier |
This behavior makes semantic sense: if something should "never expire," it should also survive restarts.
## Negative Caching with TTL Override

The `get()` method automatically handles `Option<T>` types with proper negative caching. You can set a different TTL for `None` results using `negative_ttl()` (fetch closure and `User` type are illustrative):

```rust
use memento::{CacheExpires, Memento};
use std::time::Duration;

let cache = Memento::new();

// Cache user lookups with a shorter TTL for "not found" results
let user: Option<User> = cache
    .key(("users", "123"))
    .expires(CacheExpires::After(Duration::from_secs(300))) // 5 min for found users
    .negative_ttl(Duration::from_secs(10))                  // 10 sec for "not found"
    .get(|| async { find_user(123).await })
    .await?;
```
This is useful for:
- Preventing hot-miss storms: Cache "not found" results briefly
- Protecting upstream services: Avoid hammering databases for non-existent records
- Different invalidation needs: "Not found" may change faster than existing data
Note: When using `CacheExpires::Never` without `negative_ttl()`, negative cache entries (`None` results) will also never expire. If you want "not found" results to expire sooner than permanent data, always set `negative_ttl()` explicitly when using `CacheExpires::Never`.
## Stampede Protection

Memento includes local stampede protection that coalesces concurrent cache misses within the same process. When multiple requests hit the same cache miss simultaneously, only one fetch is performed and the result is shared. (Task-spawning shape below is illustrative.)

```rust
// These concurrent requests will only trigger ONE fetch
let handles: Vec<_> = (0..10)
    .map(|_| {
        let cache = cache.clone();
        tokio::spawn(async move {
            cache.key(("users", "123")).get(fetch_user).await
        })
    })
    .collect();
```
Important notes:
- Stampede protection is local only (per-process)
- No distributed locking or coordination
- Best-effort: some edge cases may still trigger duplicate fetches
- Failures wake all waiters with the error
## Architecture

### Tiers
- Local (Memory): Always present, fastest, in-process only
- Shared (Redis): Optional, distributed across instances
- Durable (Postgres): Optional, persistent storage
### Read Order
Reads always check tiers in order: Local → Shared → Durable
When data is found in a lower tier (e.g., Durable), it's automatically promoted to higher tiers (Local, Shared) with appropriate TTLs.
### Write Behavior

Writes go to the base tier determined by cost and expiration:

- `CacheExpires::Never` → Durable (if available), regardless of cost
- Cheap → Local only
- Moderate → Shared (if available), else Local
- Expensive → Durable (if available), else Shared, else Local
### Promotion Rules
Reads from slower tiers may be promoted to faster tiers with bounded TTLs:
- Direction: Durable → Shared → Local (toward faster tiers)
- Only if the target tier exists
- Only for TTL-bounded entries
- Never authoritative (source of truth remains in the base tier)
### Invariants
The tiered cache system guarantees:
- Memory always exists: Local tier is always available
- Invalid plans error: Referencing non-existent tiers fails at build time
- Graceful degradation: Missing tiers fall back to available ones
- Durable never used if absent: No accidental Durable references
- Local never authoritative: Source of truth is always the highest configured tier
## Postgres Schema
The Postgres store requires a table to be created via migrations. The store never creates or modifies schema at runtime.
### Migration
Create a new migration file (e.g., `migrations/YYYYMMDDHHMMSS_create_memento_cache.sql`). The `CREATE TABLE` and `CREATE INDEX` heads were lost in extraction and are reconstructed here (table name taken from the `COMMENT ON` statements; the index name is an assumption):

```sql
-- Memento cache table for durable caching
-- This table stores cache entries that need to persist across restarts
CREATE TABLE public.memento_cache (
    key TEXT PRIMARY KEY,
    value BYTEA,
    is_none BOOLEAN NOT NULL,
    expires_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    CHECK (
        (is_none = TRUE AND value IS NULL)
        OR (is_none = FALSE AND value IS NOT NULL)
    )
);

-- Index for efficient cleanup of expired entries
-- Partial index excludes permanent entries (expires_at IS NULL)
CREATE INDEX memento_cache_expires_at_idx
    ON public.memento_cache (expires_at)
    WHERE expires_at IS NOT NULL;

-- Documentation
COMMENT ON TABLE public.memento_cache IS
    'Memento durable cache storage for expensive computations';
COMMENT ON COLUMN public.memento_cache.key IS
    'Cache key (colon-separated segments)';
COMMENT ON COLUMN public.memento_cache.value IS
    'Serialized cache value (bincode), NULL if negative cache entry';
COMMENT ON COLUMN public.memento_cache.is_none IS
    'True if this is a negative cache entry (None result)';
COMMENT ON COLUMN public.memento_cache.expires_at IS
    'When this entry expires (NULL = never)';
COMMENT ON COLUMN public.memento_cache.created_at IS
    'When this entry was created';
```
Run the migration using your preferred tool (sqlx, diesel, etc.):

```sh
# Using sqlx
sqlx migrate run

# Using psql directly
psql "$DATABASE_URL" -f migrations/YYYYMMDDHHMMSS_create_memento_cache.sql
```
### Custom Table Name

If you need a different table name or schema:

```sql
-- Custom schema and table name
CREATE TABLE myapp.cache_entries (
    -- same columns as above
    key TEXT PRIMARY KEY,
    value BYTEA,
    is_none BOOLEAN NOT NULL,
    expires_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX cache_entries_expires_at_idx
    ON myapp.cache_entries (expires_at)
    WHERE expires_at IS NOT NULL;
```
Then configure the builder (method names beyond `.schema()`, `.table()`, and `.build()` are reconstructed):

```rust
let pg_store = PostgresBuilder::new(pool)
    .schema("myapp")
    .table("cache_entries")
    .build()
    .await?;
```
### Schema Design

| Column | Type | Purpose |
|---|---|---|
| `key` | `TEXT PRIMARY KEY` | Cache key (colon-separated segments) |
| `value` | `BYTEA` | Bincode-serialized value, NULL for negative cache |
| `is_none` | `BOOLEAN NOT NULL` | True if this is a negative cache entry |
| `expires_at` | `TIMESTAMPTZ` | Expiration time, NULL = never expires |
| `created_at` | `TIMESTAMPTZ NOT NULL` | When entry was created |
Constraints:

- `CHECK` constraint enforces: `is_none = TRUE ⟺ value IS NULL`
- Partial index on `expires_at` excludes permanent entries for efficient cleanup
### Maintenance

Expired entries are filtered on read, but you may want periodic cleanup:

```sql
-- Manual cleanup (run periodically via cron or pg_cron)
DELETE FROM public.memento_cache
WHERE expires_at IS NOT NULL AND expires_at < NOW();
```

Or use the built-in cleanup method:

```rust
// Returns the number of entries removed
let removed = pg_store.cleanup_expired().await?;
```
## Legacy API

⚠️ Deprecated: The legacy strategy-based API is maintained for backwards compatibility only. New applications should use the tiered API shown above. (The legacy snippet was lost in extraction; the names below are reconstructed guesses.)

```rust
use memento::Cache;

// Memory only (old API)
let cache = Cache::default();

// Redis (old API)
let cache = Cache::new(redis_strategy);

// Tiered (old API)
let cache = Cache::new(tiered_strategy);
```
## License
MIT