Memory hierarchy and cache-aware optimization.
This module provides cache-aware optimizations for better memory performance:
- Cache modeling: Model L1/L2/L3 cache behavior
- Data layout optimization: Arrange data for cache efficiency
- Loop tiling: Optimize loop nests for cache reuse (see the sketch after this list)
- Prefetching: Insert software prefetch directives
- NUMA optimization: Optimize for non-uniform memory access
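As a crate-independent sketch of what loop tiling does (the names below, such as matmul_tiled and TILE, are illustrative and not part of this module's API): a blocked matrix multiplication processes TILE x TILE sub-blocks so the reused operands stay resident in L1/L2 instead of being evicted between iterations. A real optimizer would derive the tile size from the measured cache sizes rather than hard-coding it, which appears to be what CacheConfig::from_system in the example below is for.

// Illustrative only: blocked (tiled) matrix multiplication in plain Rust.
// Working on TILE x TILE sub-blocks keeps roughly 3 * TILE * TILE * 4 bytes
// hot in cache, so elements of `b` and `c` are reused many times per load.
const N: usize = 256;
const TILE: usize = 32; // three 32x32 f32 tiles (about 12 KiB) fit in a 32 KiB L1

fn matmul_tiled(a: &[f32], b: &[f32], c: &mut [f32]) {
    for ii in (0..N).step_by(TILE) {
        for kk in (0..N).step_by(TILE) {
            for jj in (0..N).step_by(TILE) {
                // Innermost loops touch only the current tiles.
                for i in ii..ii + TILE {
                    for k in kk..kk + TILE {
                        let a_ik = a[i * N + k];
                        for j in jj..jj + TILE {
                            c[i * N + j] += a_ik * b[k * N + j];
                        }
                    }
                }
            }
        }
    }
}

fn main() {
    let a = vec![1.0f32; N * N];
    let b = vec![1.0f32; N * N];
    let mut c = vec![0.0f32; N * N];
    matmul_tiled(&a, &b, &mut c);
    assert_eq!(c[0], N as f32); // each entry is a dot product of N ones
}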
§Example
use tensorlogic_infer::{CacheOptimizer, CacheConfig, TilingStrategy};
// Configure cache optimizer
let config = CacheConfig::from_system()
    .with_tiling_enabled(true)
    .with_prefetch_distance(8);
let optimizer = CacheOptimizer::new(config);
// Optimize a previously constructed computation graph for cache efficiency
let optimized = optimizer.optimize(&graph)?;
// Check cache metrics
let metrics = optimizer.estimate_cache_metrics(&optimized);
println!("Estimated cache hit rate: {:.2}%", metrics.hit_rate * 100.0);Structs§
- Cache
Config - Cache configuration.
- Cache
Metrics - Cache metrics for a computation.
- Cache
Optimizer - Cache-aware optimizer.
- Optimization
Stats - Optimization statistics.
- Tiling
Params - Loop tiling parameters.
Enums§
- Access
Pattern - Memory access pattern.
- Cache
Level - Cache level.
- Cache
Optimizer Error - Cache optimization errors.
- Data
Layout - Data layout strategy.
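
As a crate-independent illustration of why AccessPattern and DataLayout matter (the functions below are illustrative, not part of this module's API): with a row-major layout, row-order traversal reads consecutive addresses that the hardware prefetcher has already pulled in, while column-order traversal strides N elements between reads and misses far more often.

// Illustrative only: the same buffer summed with two access patterns.
const N: usize = 1024;

fn sum_row_order(m: &[f32]) -> f32 {
    let mut s = 0.0;
    for i in 0..N {
        for j in 0..N {
            s += m[i * N + j]; // consecutive addresses: sequential access
        }
    }
    s
}

fn sum_col_order(m: &[f32]) -> f32 {
    let mut s = 0.0;
    for j in 0..N {
        for i in 0..N {
            s += m[i * N + j]; // stride of N elements: strided access
        }
    }
    s
}

fn main() {
    let m = vec![1.0f32; N * N];
    // Identical results, but the row-order sum is typically several times
    // faster on large matrices because of cache behavior alone.
    assert_eq!(sum_row_order(&m), sum_col_order(&m));
}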