cache_rs/lib.rs
//! # Cache
//!
//! A collection of high-performance, memory-efficient cache implementations supporting various eviction policies.
//!
//! This crate provides cache implementations optimized for performance and memory usage that can be used
//! in both `std` and `no_std` environments. All cache operations (`get`, `get_mut`, `put`, and `remove`)
//! have O(1) time complexity.
//!
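//! As a quick illustration of these operations, here is a minimal sketch using [`LruCache`]
//! (the `get_mut` and `remove` calls are shown with the usual `Option`-returning signatures;
//! see the [`lru`] module docs for the exact API):
//!
//! ```rust
//! use cache_rs::LruCache;
//! use core::num::NonZeroUsize;
//!
//! let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
//!
//! // `put` inserts or replaces an entry.
//! cache.put("a", 1);
//! cache.put("b", 2);
//!
//! // `get` returns a shared reference to the cached value, if present.
//! assert_eq!(cache.get(&"a"), Some(&1));
//!
//! // `get_mut` gives mutable access for in-place updates (assumed to return `Option<&mut V>`).
//! if let Some(v) = cache.get_mut(&"a") {
//!     *v += 10;
//! }
//! assert_eq!(cache.get(&"a"), Some(&11));
//!
//! // `remove` takes an entry out of the cache.
//! cache.remove(&"b");
//! assert_eq!(cache.get(&"b"), None);
//! ```
//!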
//! ## Available Cache Algorithms
//!
//! | Algorithm | Description | Best Use Case |
//! |-----------|-------------|---------------|
//! | [`LruCache`] | Least Recently Used | General purpose, recency-based access patterns |
//! | [`SlruCache`] | Segmented LRU | Mixed access patterns with both hot and cold items |
//! | [`LfuCache`] | Least Frequently Used | Frequency-based access patterns |
//! | [`LfudaCache`] | LFU with Dynamic Aging | Long-running caches with changing popularity |
//! | [`GdsfCache`] | Greedy Dual-Size Frequency | CDNs and size-aware caching |
//!
//! ## Performance Characteristics
//!
//! | Algorithm | Space Overhead | Hit Rate for Recency | Hit Rate for Frequency | Scan Resistance |
//! |-----------|----------------|----------------------|------------------------|-----------------|
//! | LRU | Low | High | Low | Poor |
//! | SLRU | Medium | High | Medium | Good |
//! | LFU | Medium | Low | High | Excellent |
//! | LFUDA | Medium | Medium | High | Excellent |
//! | GDSF | High | Medium | High | Good |
//!
//! ## When to Use Each Algorithm
//!
//! - **LRU**: Use for general-purpose caching where recent items are likely to be accessed again (see the example below).
//! - **SLRU**: Use when you have a mix of frequently and occasionally accessed items.
//! - **LFU**: Use when access frequency is more important than recency.
//! - **LFUDA**: Use for long-running caches where item popularity changes over time.
//! - **GDSF**: Use when items have different sizes and you want to optimize for both size and popularity.
//!
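//! For example, under a recency-based workload an [`LruCache`] keeps recently touched keys
//! resident, while a one-off scan of cold keys can evict them; this is the scan-resistance
//! gap that [`SlruCache`] and the frequency-based caches are designed to close:
//!
//! ```rust
//! use cache_rs::LruCache;
//! use core::num::NonZeroUsize;
//!
//! let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
//! cache.put("hot", 1);
//! cache.put("warm", 2);
//!
//! // Touching "hot" makes it the most recently used entry.
//! assert_eq!(cache.get(&"hot"), Some(&1));
//!
//! // Inserting a new key at capacity evicts the least recently used entry ("warm").
//! cache.put("scan", 3);
//! assert_eq!(cache.get(&"warm"), None);
//! assert_eq!(cache.get(&"hot"), Some(&1));
//! ```
//!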
//! ## Feature Flags
//!
//! - `hashbrown`: Uses the [hashbrown](https://crates.io/crates/hashbrown) crate for the `HashMap` implementation (enabled by default)
//! - `nightly`: Enables nightly-only optimizations for improved performance
//! - `std`: Enables standard library features (disabled by default to support `no_std`)
//!
//! ## No-std Support
//!
//! This crate works in `no_std` environments by default. Enable the `std` feature for additional functionality.
//!
//! ### Example with no_std
//!
//! ```rust
//! use cache_rs::LruCache;
//! use core::num::NonZeroUsize;
//!
//! // Create a cache in a no_std environment
//! let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
//! cache.put("key", "value");
//! assert_eq!(cache.get(&"key"), Some(&"value"));
//! ```
//!
//! ## Modules
//!
//! - [`lru`]: A Least Recently Used (LRU) cache implementation
//! - [`slru`]: A Segmented LRU (SLRU) cache implementation
//! - [`lfu`]: A Least Frequently Used (LFU) cache implementation
//! - [`lfuda`]: An LFU with Dynamic Aging (LFUDA) cache implementation
//! - [`gdsf`]: A Greedy Dual-Size Frequency (GDSF) cache implementation
//! - [`config`]: Configuration structures for all cache algorithm implementations
//! - [`metrics`]: Metrics collection for cache performance monitoring

#![no_std]

#[cfg(test)]
extern crate scoped_threadpool;

/// Doubly linked list implementation with in-place editing capabilities.
///
/// This module provides a memory-efficient doubly linked list that allows for
/// efficient insertion, removal, and reordering operations.
///
/// **Note**: This module is internal infrastructure and should not be used directly
/// by library consumers. It exposes unsafe raw pointer operations that require
/// careful invariant maintenance. Use the high-level cache implementations instead.
pub(crate) mod list;

/// Cache configuration structures.
///
/// Provides configuration structures for all cache algorithm implementations.
pub mod config;

/// Least Recently Used (LRU) cache implementation.
///
/// Provides a fixed-size cache that evicts the least recently used items when
/// the capacity is reached.
pub mod lru;

/// Segmented LRU (SLRU) cache implementation.
///
/// Provides a fixed-size cache that uses a segmented approach to evict
/// items based on their usage patterns. This is useful for scenarios where
/// certain items are accessed more frequently than others.
pub mod slru;

/// Least Frequently Used (LFU) cache implementation.
///
/// Provides a fixed-size cache that evicts the least frequently used items
/// when capacity is reached. Items are tracked by their access frequency.
pub mod lfu;

/// Least Frequently Used with Dynamic Aging (LFUDA) cache implementation.
///
/// An enhanced LFU cache that addresses the aging problem by dynamically
/// adjusting item priorities based on a global aging factor.
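///
/// As an illustration of the aging effect (this sketch uses the standard LFUDA formulation,
/// priority = frequency + global age at last access; the crate's internal bookkeeping may differ):
///
/// ```rust
/// // An item that reached frequency 3 while the global age was still 0:
/// let stale_priority = 3 + 0;
/// // A new item inserted once after the global age has grown to 5:
/// let fresh_priority = 1 + 5;
/// // Dynamic aging lets fresh items outrank stale but formerly popular ones.
/// assert!(fresh_priority > stale_priority);
/// ```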
pub mod lfuda;

/// Greedy Dual-Size Frequency (GDSF) cache implementation.
///
/// A cache replacement algorithm that combines frequency, size, and aging.
/// Assigns each item a priority of `(frequency / size) + global_age`.
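///
/// A quick sketch of that priority computation with made-up values (not crate API):
///
/// ```rust
/// // priority = frequency / size + global_age
/// let (frequency, size, global_age) = (6.0_f64, 3.0_f64, 1.0_f64);
/// let priority = frequency / size + global_age;
/// assert_eq!(priority, 3.0);
///
/// // A smaller item with the same frequency scores higher, so large items are evicted first.
/// let smaller_priority = frequency / 2.0 + global_age;
/// assert!(smaller_priority > priority);
/// ```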
pub mod gdsf;

/// Cache metrics system.
///
/// Provides a flexible metrics collection and reporting system for all cache algorithms.
/// Each algorithm can track algorithm-specific metrics while implementing a common interface.
pub mod metrics;

pub use gdsf::GdsfCache;
pub use lfu::LfuCache;
pub use lfuda::LfudaCache;
pub use lru::LruCache;
pub use slru::SlruCache;