cache_rs/lib.rs
//! # Cache
//!
//! A collection of high-performance, memory-efficient cache implementations supporting various eviction policies.
//!
//! The caches are optimized for performance and memory usage and work in both `std` and
//! `no_std` environments. All cache operations (`get`, `get_mut`, `put`, and `remove`)
//! run in O(1) time.
//!
//! ## Available Cache Algorithms
//!
//! | Algorithm | Description | Best Use Case |
//! |-----------|-------------|---------------|
//! | [`LruCache`] | Least Recently Used | General purpose, recency-based access patterns |
//! | [`SlruCache`] | Segmented LRU | Mixed access patterns with both hot and cold items |
//! | [`LfuCache`] | Least Frequently Used | Frequency-based access patterns |
//! | [`LfudaCache`] | LFU with Dynamic Aging | Long-running caches with changing popularity |
//! | [`GdsfCache`] | Greedy Dual Size Frequency | CDNs and size-aware caching |
//!
//! ## Performance Characteristics
//!
//! | Algorithm | Space Overhead | Hit Rate for Recency | Hit Rate for Frequency | Scan Resistance |
//! |-----------|---------------|----------------------|------------------------|----------------|
//! | LRU | Low | High | Low | Poor |
//! | SLRU | Medium | High | Medium | Good |
//! | LFU | Medium | Low | High | Excellent |
//! | LFUDA | Medium | Medium | High | Excellent |
//! | GDSF | High | Medium | High | Good |
//!
//! ## When to Use Each Algorithm
//!
//! - **LRU**: Use for general-purpose caching where recent items are likely to be accessed again.
//! - **SLRU**: Use when you have a mix of frequently and occasionally accessed items.
//! - **LFU**: Use when access frequency is more important than recency.
//! - **LFUDA**: Use for long-running caches where item popularity changes over time.
//! - **GDSF**: Use when items have different sizes and you want to optimize for both size and popularity.
//!
//! ## Feature Flags
//!
//! - `hashbrown`: Uses the [hashbrown](https://crates.io/crates/hashbrown) crate for the HashMap implementation (enabled by default)
//! - `nightly`: Enables nightly-only optimizations for improved performance
//! - `std`: Enables standard library features (disabled by default to support `no_std`)
//! - `concurrent`: Enables thread-safe concurrent cache implementations using `parking_lot`
//!
//! ## No-std Support
//!
//! This crate works in `no_std` environments by default. Enable the `std` feature for additional functionality.
//!
//! ### Example with no_std
//!
//! ```rust
//! use cache_rs::LruCache;
//! use core::num::NonZeroUsize;
//!
//! // Create a cache in a no_std environment
//! let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
//! cache.put("key", "value");
//! assert_eq!(cache.get(&"key"), Some(&"value"));
//! ```
//!
//! ## Modules
//!
//! - [`lru`]: A Least Recently Used (LRU) cache implementation
//! - [`slru`]: A Segmented LRU (SLRU) cache implementation
//! - [`lfu`]: A Least Frequently Used (LFU) cache implementation
//! - [`lfuda`]: An LFU with Dynamic Aging (LFUDA) cache implementation
//! - [`gdsf`]: A Greedy Dual Size Frequency (GDSF) cache implementation
//! - [`config`]: Configuration structures for all cache algorithm implementations
//! - [`metrics`]: Metrics collection for cache performance monitoring
//! - [`concurrent`]: Thread-safe concurrent cache implementations (requires the `concurrent` feature)

#![no_std]

#[cfg(test)]
extern crate scoped_threadpool;

/// Doubly linked list implementation with in-place editing capabilities.
///
/// This module provides a memory-efficient doubly linked list that allows for
/// efficient insertion, removal, and reordering operations.
///
/// **Note**: This module is internal infrastructure and should not be used directly
/// by library consumers. It exposes unsafe raw pointer operations that require
/// careful invariant maintenance. Use the high-level cache implementations instead.
pub(crate) mod list;

/// Cache configuration structures.
///
/// Provides configuration structures for all cache algorithm implementations.
pub mod config;

/// Least Recently Used (LRU) cache implementation.
///
/// Provides a fixed-size cache that evicts the least recently used items when
/// the capacity is reached.
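///
/// A short sketch of this eviction behavior, using the constructor shown in the
/// crate-level example and assuming (as is standard for LRU) that `get` refreshes
/// an entry's recency:
///
/// ```rust
/// use cache_rs::LruCache;
/// use core::num::NonZeroUsize;
///
/// // Capacity of two entries.
/// let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
/// cache.put("a", 1);
/// cache.put("b", 2);
///
/// // Touch "a" so that "b" becomes the least recently used entry.
/// assert_eq!(cache.get(&"a"), Some(&1));
///
/// // Inserting a third entry evicts "b".
/// cache.put("c", 3);
/// assert_eq!(cache.get(&"b"), None);
/// assert_eq!(cache.get(&"c"), Some(&3));
/// ```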
pub mod lru;

/// Segmented LRU (SLRU) cache implementation.
///
/// Provides a fixed-size cache that uses a segmented approach to evict
/// items based on their usage patterns. This is useful for scenarios where
/// certain items are accessed more frequently than others.
pub mod slru;

/// Least Frequently Used (LFU) cache implementation.
///
/// Provides a fixed-size cache that evicts the least frequently used items
/// when capacity is reached. Items are tracked by their access frequency.
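///
/// Conceptually, the entry with the smallest access count is the eviction
/// candidate, as in this minimal sketch (plain arithmetic, not this crate's API):
///
/// ```rust
/// // (key, access count) pairs currently in the cache.
/// let frequencies = [("a", 7u32), ("b", 2), ("c", 40)];
/// // LFU evicts the least frequently used entry: "b".
/// let victim = frequencies.iter().min_by_key(|entry| entry.1).unwrap().0;
/// assert_eq!(victim, "b");
/// ```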
pub mod lfu;

/// Least Frequently Used with Dynamic Aging (LFUDA) cache implementation.
///
/// An enhanced LFU cache that addresses the aging problem by dynamically
/// adjusting item priorities based on a global aging factor.
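///
/// The snippet below is a numeric sketch of the textbook LFUDA aging rule
/// (priority = frequency + global age, with the global age raised to the priority
/// of each evicted entry); this crate's internal bookkeeping may differ in detail.
///
/// ```rust
/// // An old entry accumulated 100 hits while the cache was young.
/// let mut global_age: u64 = 0;
/// let old_priority = 100 + global_age;
///
/// // When that entry is eventually evicted, the cache "ages".
/// global_age = old_priority;
///
/// // A brand-new entry with only 2 hits now outranks anything scored before
/// // the aging step, so stale-but-once-popular items cannot linger forever.
/// let new_priority = 2 + global_age;
/// assert!(new_priority > old_priority);
/// ```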
pub mod lfuda;

/// Greedy Dual-Size Frequency (GDSF) cache implementation.
///
/// A cache replacement algorithm that combines frequency, size, and aging.
/// Assigns priority using the formula (Frequency / Size) + Global_Age.
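///
/// A small worked example of that formula (plain arithmetic, independent of the
/// cache API):
///
/// ```rust
/// // priority = (frequency / size) + global_age
/// let global_age = 0.0_f64;
/// let small_hot = (50.0 / 4.0) + global_age;    // 50 hits, 4 KB object
/// let large_cold = (5.0 / 512.0) + global_age;  // 5 hits, 512 KB object
///
/// // The small, frequently requested object wins, which is why GDSF suits
/// // CDNs and other size-aware workloads.
/// assert!(small_hot > large_cold);
/// ```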
pub mod gdsf;

/// Cache metrics system.
///
/// Provides a flexible metrics collection and reporting system for all cache algorithms.
/// Each algorithm can track algorithm-specific metrics while implementing a common interface.
pub mod metrics;

/// Concurrent cache implementations.
///
/// Provides thread-safe cache implementations using segmented storage for high-performance
/// multi-threaded access. Each concurrent cache partitions the key space across multiple
/// segments, with each segment protected by its own lock.
///
/// Available when the `concurrent` feature is enabled.
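///
/// The hypothetical helper below (not part of this crate's API) illustrates the
/// segment-selection idea: a key's hash picks which independently locked segment
/// owns it.
///
/// ```rust
/// use core::hash::{Hash, Hasher};
/// use std::collections::hash_map::DefaultHasher;
///
/// // Hypothetical: map a key to one of `num_segments` lock-protected segments.
/// fn segment_for<K: Hash>(key: &K, num_segments: usize) -> usize {
///     let mut hasher = DefaultHasher::new();
///     key.hash(&mut hasher);
///     (hasher.finish() as usize) % num_segments
/// }
///
/// let idx = segment_for(&"user:42", 8);
/// assert!(idx < 8);
/// ```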
#[cfg(feature = "concurrent")]
pub mod concurrent;

pub use gdsf::GdsfCache;
pub use lfu::LfuCache;
pub use lfuda::LfudaCache;
pub use lru::LruCache;
pub use slru::SlruCache;

#[cfg(feature = "concurrent")]
pub use concurrent::{
    ConcurrentGdsfCache, ConcurrentLfuCache, ConcurrentLfudaCache, ConcurrentLruCache,
    ConcurrentSlruCache,
};