Module cache
Result caching with LRU eviction and TTL

Design Philosophy

This cache implementation follows these design principles:

  • Thread-Safe: Uses Arc + RwLock for concurrent access
  • LRU Eviction: Least Recently Used items are evicted first
  • TTL Support: Entries expire after configured time-to-live
  • Statistics: Tracks hits, misses, and hit rates
  • Lazy Cleanup: Expired items cleaned on access (no background threads)
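The principles above can be sketched in a minimal, single-threaded form. Everything here is illustrative: the `SketchCache` name, the `String` value type, and the `Vec`-based recency list are assumptions for brevity, while the real `ResultCache` stores `ScanResult` values behind `Arc` + `RwLock` for concurrent access.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch of the LRU + lazy-TTL logic, without the locking layer.
struct SketchCache {
    max_size: usize,
    ttl: Duration,
    entries: HashMap<String, (String, Instant)>,
    order: Vec<String>, // front = least recently used
}

impl SketchCache {
    fn new(max_size: usize, ttl: Duration) -> Self {
        SketchCache { max_size, ttl, entries: HashMap::new(), order: Vec::new() }
    }

    fn insert(&mut self, key: String, value: String) {
        // LRU eviction: when full, drop the least recently used key.
        if self.entries.len() >= self.max_size && !self.entries.contains_key(&key) {
            if !self.order.is_empty() {
                let lru = self.order.remove(0);
                self.entries.remove(&lru);
            }
        }
        // Mark the key as most recently used and record its insertion time.
        self.order.retain(|k| k != &key);
        self.order.push(key.clone());
        self.entries.insert(key, (value, Instant::now()));
    }

    fn get(&mut self, key: &str) -> Option<String> {
        // TTL check happens here, on access.
        let hit = match self.entries.get(key) {
            Some((value, inserted)) if inserted.elapsed() < self.ttl => Some(value.clone()),
            Some(_) => None, // present but expired
            None => return None,
        };
        match hit {
            Some(v) => {
                // Cache hit: move the key to the most-recently-used position.
                self.order.retain(|k| k.as_str() != key);
                self.order.push(key.to_string());
                Some(v)
            }
            None => {
                // Lazy cleanup: the expired entry is removed on access,
                // so no background thread is needed.
                self.entries.remove(key);
                self.order.retain(|k| k.as_str() != key);
                None
            }
        }
    }
}
```

Checking expiry only on access keeps reads and writes lock-local; the trade-off is that expired entries occupy capacity until they are next touched or evicted.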

Usage Example

use llm_shield_models::cache::{ResultCache, CacheConfig};
use llm_shield_core::ScanResult;
use std::time::Duration;

let cache = ResultCache::new(CacheConfig {
    max_size: 1000,
    ttl: Duration::from_secs(300),
});

// Insert a result
let result = ScanResult::pass("safe text".to_string());
cache.insert("key1".to_string(), result);

// Retrieve it
if let Some(cached_result) = cache.get("key1") {
    println!("Cache hit!");
}

// Check statistics
let stats = cache.stats();
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
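Since `hit_rate()` in the example above is multiplied by `100.0`, it presumably returns a fraction in `[0.0, 1.0]`. A plausible shape for that computation (the `hits`/`misses` field names are assumptions, not the documented API):

```rust
// Hypothetical sketch of the statistics struct; field names are illustrative.
#[derive(Default)]
struct CacheStats {
    hits: u64,
    misses: u64,
}

impl CacheStats {
    // Hit rate as a fraction in [0.0, 1.0]; 0.0 before any lookups,
    // to avoid dividing by zero.
    fn hit_rate(&self) -> f64 {
        let total = self.hits + self.misses;
        if total == 0 { 0.0 } else { self.hits as f64 / total as f64 }
    }
}
```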

Structs

CacheConfig
    Configuration for the result cache
CacheStats
    Cache performance statistics
ResultCache
    Thread-safe result cache with LRU eviction and TTL