§Caching Module
This module provides caching mechanisms for fitness evaluations in genetic algorithms. Caching can significantly improve performance by avoiding redundant calculations, especially when fitness evaluations are computationally expensive.
§Overview
In genetic algorithms, the same phenotype may be evaluated multiple times during evolution. By caching fitness scores, we can avoid recalculating values we’ve already computed. This module provides two main caching implementations:
- `CachedChallenge`: A thread-safe cache using a mutex
- `ThreadLocalCachedChallenge`: A thread-local cache for better parallel performance
§Usage
To use caching, you need to:
- Implement the `CacheKey` trait for your phenotype
- Wrap your challenge in one of the cache decorators
```rust
use genalg::caching::{CacheKey, CachedChallenge};
use genalg::evolution::Challenge;
use genalg::phenotype::Phenotype;
use genalg::rng::RandomNumberGenerator;
use std::fmt::Debug;

#[derive(Clone, Debug)]
struct MyPhenotype {
    values: Vec<i32>,
}

impl Phenotype for MyPhenotype {
    fn crossover(&mut self, _other: &Self) {
        // Implementation omitted for brevity
    }

    fn mutate(&mut self, _rng: &mut RandomNumberGenerator) {
        // Implementation omitted for brevity
    }
}

// Implement CacheKey to enable caching
impl CacheKey for MyPhenotype {
    // Use a type that uniquely identifies phenotypes with the same fitness
    type Key = Vec<i32>;

    fn cache_key(&self) -> Self::Key {
        self.values.clone()
    }
}

// Your challenge implementation
struct MyChallenge;

impl Challenge<MyPhenotype> for MyChallenge {
    fn score(&self, phenotype: &MyPhenotype) -> f64 {
        // Expensive computation...
        phenotype.values.iter().sum::<i32>() as f64
    }
}

// Create a cached version of your challenge
let challenge = MyChallenge;
let cached_challenge = CachedChallenge::new(challenge);

// Use the cached challenge just like the original
let phenotype = MyPhenotype { values: vec![1, 2, 3] };
let score = cached_challenge.score(&phenotype); // Computed and cached
let score_again = cached_challenge.score(&phenotype); // Retrieved from cache
```
§Choosing a Cache Implementation
- `CachedChallenge`: Good for single-threaded or low-contention scenarios
- `ThreadLocalCachedChallenge`: Better for highly parallel workloads (see the sketch below)
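A rough sketch of swapping in the thread-local variant, reusing `MyChallenge` and `MyPhenotype` from the usage example above, and assuming `ThreadLocalCachedChallenge` exposes a `new` constructor analogous to `CachedChallenge::new`:

```rust
use genalg::caching::ThreadLocalCachedChallenge;

// Assumption: `new` wraps a challenge the same way CachedChallenge::new does.
let challenge = MyChallenge;
let cached_challenge = ThreadLocalCachedChallenge::new(challenge);

// Scoring looks identical to the mutex-based version; each thread keeps
// its own cache, so no lock is shared between threads.
let phenotype = MyPhenotype { values: vec![1, 2, 3] };
let score = cached_challenge.score(&phenotype);
```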
§Performance Considerations
- Cache lookups are generally much faster than fitness evaluations for complex problems
- The cache grows unbounded, so consider calling `clear_cache()` periodically for long-running evolutions (see the sketch after this list)
- Thread contention on the mutex can become a bottleneck in `CachedChallenge` with many threads
- `ThreadLocalCachedChallenge` avoids mutex contention but may use more memory with many threads
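A minimal sketch of periodic clearing, continuing the usage example above. The generation loop and the interval are illustrative only, and `clear_cache()` is assumed to be callable through a shared reference, as the mutex-based design suggests:

```rust
// Illustrative only: clear the cache every fixed number of generations
// to keep memory bounded during a long-running evolution.
const CLEAR_INTERVAL: usize = 100; // hypothetical tuning knob, not part of the API

for generation in 1..=10_000 {
    // ... score and evolve the population using `cached_challenge` here ...

    if generation % CLEAR_INTERVAL == 0 {
        // Assumption: clear_cache() discards all stored fitness values.
        cached_challenge.clear_cache();
    }
}
```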
§Memory Usage
Both cache implementations store fitness values indefinitely until `clear_cache()` is called. For long-running evolutions or large populations, monitor memory usage and clear the cache when appropriate.
Structs§
- `CachedChallenge` - A challenge wrapper that caches fitness evaluations.
- `ThreadLocalCache` - A thread-local cache for storing key-value pairs.
- `ThreadLocalCachedChallenge` - A challenge wrapper that caches fitness evaluations using thread-local storage.
Traits§
- `CacheKey` - A trait for phenotypes that can be cached.