
Crate cortexai_cache


Semantic Cache for cortex

Provides caching with optional semantic similarity matching for LLM responses. Supports multiple backends:

  • In-Memory: Fast LRU cache with TTL support (default)
  • Redis/Valkey: Distributed cache for multi-instance deployments

§Features

  • memory (default): Enable in-memory LRU cache
  • redis: Enable Redis/Valkey backend
  • full: Enable all backends
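
The features above are selected in `Cargo.toml` as usual for Cargo features. A minimal sketch (the version number is illustrative, not a statement about published releases):

```toml
[dependencies]
# "memory" is on by default; add "redis" for the Redis/Valkey backend,
# or "full" to enable all backends at once.
cortexai_cache = { version = "0.1", features = ["redis"] }
```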

§Example

use cortexai_cache::{Cache, CacheConfig, MemoryCache};

// The calls below run inside an async context and use `?` to propagate errors.

// Create an in-memory cache
let cache = MemoryCache::new(CacheConfig::default());

// Store a response
cache.store("What is Rust?", "context", "Rust is a systems programming language", vec![]).await?;

// Retrieve it again (exact match on query and context)
if let Some(entry) = cache.get("What is Rust?", "context").await? {
    println!("Cached response: {}", entry.response);
}
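
Beyond exact matching, semantic matching treats two queries as equivalent when their embedding vectors are sufficiently similar. The sketch below shows the usual comparison, cosine similarity, in plain Rust; `cosine_similarity` is a hypothetical helper for illustration, not part of this crate's API.

```rust
// Cosine similarity between two embedding vectors: 1.0 means identical
// direction, 0.0 means orthogonal. A semantic cache typically declares a
// hit when this value exceeds a configured threshold.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0;
    }
    dot / (norm_a * norm_b)
}

fn main() {
    // Two near-identical "query embeddings" (toy values).
    let query = vec![1.0, 0.0, 1.0];
    let cached = vec![1.0, 0.1, 0.9];
    let sim = cosine_similarity(&query, &cached);
    println!("similarity = {sim:.3}");
    assert!(sim > 0.9); // would count as a semantic cache hit at threshold 0.9
}
```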

Structs§

CacheBuilder
Builder for creating caches with fallback
CacheConfig
Configuration for cache behavior
CacheEntry
A cached entry
CacheStats
Statistics for cache operations
MemoryCache
In-memory LRU cache with TTL support
SemanticMemoryCache
Memory cache with semantic similarity support

Enums§

CacheError
Errors that can occur during cache operations
OptionalCache
Optional cache wrapper for graceful degradation

Traits§

Cache
Trait for cache implementations
SemanticCache
Trait for semantic cache with similarity matching
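
To illustrate the TTL behavior that MemoryCache describes, here is a self-contained toy cache in std-only Rust. It is a sketch of the idea, not this crate's implementation (which also performs LRU eviction and exposes the async Cache trait).

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Toy string cache with a time-to-live: entries older than `ttl` are
// treated as absent on lookup.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (String, Instant)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn store(&mut self, key: &str, value: &str) {
        self.entries
            .insert(key.to_string(), (value.to_string(), Instant::now()));
    }

    // Returns the value only if it has not outlived the TTL.
    fn get(&self, key: &str) -> Option<&str> {
        self.entries.get(key).and_then(|(value, stored_at)| {
            if stored_at.elapsed() < self.ttl {
                Some(value.as_str())
            } else {
                None
            }
        })
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(60));
    cache.store("What is Rust?", "Rust is a systems programming language");
    assert_eq!(
        cache.get("What is Rust?"),
        Some("Rust is a systems programming language")
    );
    assert_eq!(cache.get("missing"), None);
}
```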