Semantic Cache for cortex
Provides caching with optional semantic similarity matching for LLM responses. Supports multiple backends:
- In-Memory: Fast LRU cache with TTL support (default)
- Redis/Valkey: Distributed cache for multi-instance deployments
§Features
- memory (default): Enable in-memory LRU cache
- redis: Enable Redis/Valkey backend
- full: Enable all backends
§Example
```rust
use cortexai_cache::{Cache, MemoryCache, CacheConfig};

// In-memory cache
let cache = MemoryCache::new(CacheConfig::default());

// Store a response
cache.store("What is Rust?", "context", "Rust is a systems programming language", vec![]).await?;

// Retrieve (exact match)
if let Some(entry) = cache.get("What is Rust?", "context").await? {
    println!("Cached response: {}", entry.response);
}
```
Structs§
- CacheBuilder - Builder for creating caches with fallback
- CacheConfig - Configuration for cache behavior
- CacheEntry - A cached entry
- CacheStats - Statistics for cache operations
- MemoryCache - In-memory LRU cache with TTL support
- SemanticMemoryCache - Memory cache with semantic similarity support
Enums§
- CacheError - Errors that can occur during cache operations
- OptionalCache - Optional cache wrapper for graceful degradation
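The graceful-degradation idea behind OptionalCache can be sketched as an enum that either delegates to a real backend or turns every lookup into a miss instead of an error when no cache is configured. This is a minimal illustrative sketch, not the crate's actual API; all names below (`MaybeCache`, `ToyCache`) are invented for the example:

```rust
use std::collections::HashMap;

/// Illustrative stand-in for a real cache backend.
#[derive(Default)]
struct ToyCache {
    map: HashMap<String, String>,
}

/// Either a working cache or a no-op: lookups on the disabled
/// variant are plain misses, never errors (the OptionalCache idea).
enum MaybeCache {
    Enabled(ToyCache),
    Disabled,
}

impl MaybeCache {
    fn store(&mut self, key: &str, value: &str) {
        if let MaybeCache::Enabled(cache) = self {
            cache.map.insert(key.to_string(), value.to_string());
        } // Disabled: silently drop the write.
    }

    fn get(&self, key: &str) -> Option<&str> {
        match self {
            MaybeCache::Enabled(cache) => cache.map.get(key).map(String::as_str),
            MaybeCache::Disabled => None, // always a miss
        }
    }
}

fn main() {
    let mut on = MaybeCache::Enabled(ToyCache::default());
    on.store("q", "a");
    assert_eq!(on.get("q"), Some("a"));

    let mut off = MaybeCache::Disabled;
    off.store("q", "a"); // dropped without error
    assert!(off.get("q").is_none());
    println!("ok");
}
```

The payoff of this pattern is that calling code takes the same path whether caching is enabled or not; a disabled cache simply degrades to computing every response fresh.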
Traits§
- Cache - Trait for cache implementations
- SemanticCache - Trait for semantic cache with similarity matching
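For readers curious what "LRU cache with TTL support" means mechanically, here is a minimal self-contained sketch of the technique. It is not the crate's implementation (MemoryCache's internals are not shown in this documentation); `LruTtlCache` and its fields are illustrative names:

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

/// Minimal LRU cache with a per-entry time-to-live (illustrative only).
struct LruTtlCache {
    capacity: usize,
    ttl: Duration,
    map: HashMap<String, (String, Instant)>,
    order: VecDeque<String>, // front = least recently used
}

impl LruTtlCache {
    fn new(capacity: usize, ttl: Duration) -> Self {
        Self { capacity, ttl, map: HashMap::new(), order: VecDeque::new() }
    }

    fn store(&mut self, key: &str, value: &str) {
        if self.map.contains_key(key) {
            self.order.retain(|k| k != key);
        } else if self.map.len() >= self.capacity {
            // At capacity: evict the least recently used entry.
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru);
            }
        }
        self.map.insert(key.to_string(), (value.to_string(), Instant::now()));
        self.order.push_back(key.to_string());
    }

    fn get(&mut self, key: &str) -> Option<String> {
        match self.map.get(key) {
            Some((value, stored_at)) if stored_at.elapsed() < self.ttl => {
                let value = value.clone();
                // Promote to most recently used.
                self.order.retain(|k| k != key);
                self.order.push_back(key.to_string());
                Some(value)
            }
            Some(_) => {
                // TTL elapsed: drop the stale entry and report a miss.
                self.map.remove(key);
                self.order.retain(|k| k != key);
                None
            }
            None => None,
        }
    }
}

fn main() {
    let mut cache = LruTtlCache::new(2, Duration::from_secs(60));
    cache.store("What is Rust?", "A systems programming language");
    cache.store("q2", "a2");
    cache.store("q3", "a3"); // evicts "What is Rust?" (capacity is 2)
    assert!(cache.get("What is Rust?").is_none());
    assert_eq!(cache.get("q3").as_deref(), Some("a3"));
    println!("ok");
}
```

A real implementation would avoid the O(n) `retain` scans (e.g. with an intrusive linked list or an existing LRU crate), but the eviction and expiry logic follows the same shape.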