Cache Management Module
This module provides comprehensive caching utilities for workflow execution, including cache key generation, TTL management, invalidation strategies, and cache warming capabilities.
§Features
- Cache Key Generation: Deterministic key generation for LLM prompts, code, and retrieval
- TTL Management: Time-based expiration with customizable policies
- Invalidation Strategies: Pattern-based and dependency-based invalidation
- Cache Warming: Preload frequently used results
- Cache Statistics: Track hit rates, misses, and performance metrics
§Example
use oxify_model::cache::{CacheKeyGenerator, CacheConfig, CachePolicy};
use std::time::Duration;
// Generate cache key for LLM prompt
let key = CacheKeyGenerator::llm_prompt_key(
    "gpt-4",
    "Summarize this text",
    &[("temperature", "0.7")],
);
// Configure cache policy
let config = CacheConfig::default()
    .with_ttl(Duration::from_secs(3600))
    .with_max_size(1000);
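Key generation is deterministic (see §Features): the same model, prompt, and parameters always map to the same key, which is what makes cache lookups across workflow runs possible. A quick check of that property, assuming the returned key type implements PartialEq so it can be compared with assert_eq!:
use oxify_model::cache::CacheKeyGenerator;

let a = CacheKeyGenerator::llm_prompt_key(
    "gpt-4",
    "Summarize this text",
    &[("temperature", "0.7")],
);
let b = CacheKeyGenerator::llm_prompt_key(
    "gpt-4",
    "Summarize this text",
    &[("temperature", "0.7")],
);
// Identical inputs should yield identical keys.
assert_eq!(a, b);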
Structs§
- CacheConfig - Cache configuration
- CacheEntry - Cache entry metadata
- CacheKeyGenerator - Cache key generator utilities
- CacheManager - Cache manager for workflow caching
- CacheStats - Cache statistics
- CacheWarmingConfig - Cache warming configuration
- InvalidationPlan - Cache invalidation plan
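CacheStats tracks hits, misses, and hit rates (see §Features). Its exact fields are not shown on this page; purely as a generic illustration of the metric, a hit rate is computed as hits / (hits + misses):
fn hit_rate(hits: u64, misses: u64) -> f64 {
    let total = hits + misses;
    // Avoid dividing by zero when the cache has not been queried yet.
    if total == 0 { 0.0 } else { hits as f64 / total as f64 }
}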
Enums§
- CachePolicy - Cache policy defining caching behavior
- InvalidationStrategy - Cache invalidation strategy
- WarmingStrategy - Cache warming strategy
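The types above are typically combined through CacheManager. The sketch below is illustrative only: the constructor CacheManager::new, the invalidate and warm methods, and the enum variants used are assumptions rather than the documented API; consult the individual type pages for the real signatures.
use std::time::Duration;
use oxify_model::cache::{CacheConfig, CacheManager, InvalidationStrategy, WarmingStrategy};

let config = CacheConfig::default()
    .with_ttl(Duration::from_secs(3600))
    .with_max_size(1000);
// Assumed constructor: build a manager from the configuration.
let manager = CacheManager::new(config);
// Assumed method and variant: pattern-based invalidation of matching keys.
manager.invalidate(InvalidationStrategy::Pattern("llm:gpt-4:*".to_string()));
// Assumed method and variant: preload frequently used results.
manager.warm(WarmingStrategy::Preload);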