# tenrso-planner

Production-grade contraction order planning and optimization for tensor networks. Part of the TenRSo tensor computing stack.
## Overview
`tenrso-planner` implements contraction-order planning for tensor networks, particularly those expressed in Einstein summation notation. It provides a suite of algorithms for finding efficient execution plans that minimize computational cost and memory usage.
## Features
**✅ 6 Planning Algorithms**
- Greedy heuristic (O(n³))
- Beam search with configurable width
- Dynamic programming (optimal for n ≤ 20)
- Simulated annealing
- Genetic algorithm (evolutionary search)
- Adaptive planner (auto-selects best algorithm)
**✅ Cost Modeling**
- FLOPs estimation for dense and sparse operations
- Memory usage prediction
- Sparsity-aware cost models
**✅ Representation Selection**
- Automatic dense/sparse/low-rank format selection
- Conversion cost estimation
**✅ Cache-Aware Tiling**
- L1/L2/L3 cache optimization
- Multi-level blocking strategies
- Memory budget constraints
**✅ Production Ready**
- 144 comprehensive tests (136 unit + 8 doc)
- 14 benchmark suites
- 4 detailed examples
- Full rustdoc coverage
- Zero warnings compilation
- SciRS2-Core integration for professional-grade RNG
## Installation
```toml
[dependencies]
tenrso-planner = "0.1.0-alpha.1"
```
### Optional Features
```toml
[dependencies]
tenrso-planner = { version = "0.1.0-alpha.1", features = ["serde"] }
```

- `serde`: Enables serialization/deserialization of plans
## Quick Start
```rust
// NOTE: module paths, the einsum spec, and accessor names below are
// illustrative; consult the crate docs for the exact API.
use tenrso_planner::{greedy_planner, parse, PlanningHints};

// Parse Einstein summation notation
let spec = parse("ij,jk->ik")?;

// Define tensor shapes
let shapes = vec![vec![64, 128], vec![128, 256]];

// Create a plan with default hints
let hints = PlanningHints::default();
let plan = greedy_planner(&spec, &shapes, &hints)?;

// Inspect plan details (accessor names illustrative)
println!("plan: {:?}", plan);
println!("estimated FLOPs: {}", plan.flops());
println!("peak memory: {} bytes", plan.peak_memory());
```
## Planning Algorithms
### 1. Adaptive Planner (⭐ Recommended)
Automatically selects the best algorithm based on problem characteristics:
```rust
use tenrso_planner::{AdaptivePlanner, PlanningHints}; // type names illustrative

let planner = AdaptivePlanner::new();
let hints = PlanningHints::default();
let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**Algorithm Selection:**
- Small networks (n ≤ 5): DP (optimal)
- Medium networks (5 < n ≤ 20): Beam Search or DP
- Large networks (n > 20): Greedy, SA, or GA based on time budget
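As a mental model, the dispatch is a threshold rule over network size and time budget. The sketch below mirrors the list above; the function, the `time_budget_ms` parameter, and the returned labels are hypothetical, not the crate's API.

```rust
// Illustrative dispatch mirroring the thresholds above; all names here
// are hypothetical, not the crate's real API.
fn select_algorithm(n: usize, time_budget_ms: u64) -> &'static str {
    match n {
        0..=5 => "dynamic-programming",            // small: optimal DP
        6..=20 => "beam-search-or-dp",             // medium
        _ if time_budget_ms < 100 => "greedy",     // large, tight budget
        _ if time_budget_ms < 1_000 => "simulated-annealing",
        _ => "genetic-algorithm",                  // large, generous budget
    }
}
```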
### 2. Greedy Planner
Fast heuristic that greedily selects the lowest-cost contraction at each step:
```rust
use tenrso_planner::GreedyPlanner; // type name illustrative

let planner = GreedyPlanner::new();
let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**When to use:** Fast planning required, many tensors (> 10)
**Complexity:** O(n³) time, O(n) space
**Quality:** Good (within 2-5x of optimal)
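The strategy itself is a short loop: scan all remaining pairs, contract the cheapest, repeat until one tensor remains. A minimal sketch (hypothetical names, caller-supplied cost function), which also shows where the O(n³) bound comes from:

```rust
/// Greedy contraction-order sketch (not the crate's implementation):
/// `cost(a, b)` is a caller-supplied estimate, e.g. FLOPs, and derived
/// ids (>= n) denote intermediate results.
fn greedy_order(n: usize, cost: impl Fn(usize, usize) -> u64) -> Vec<(usize, usize)> {
    let mut alive: Vec<usize> = (0..n).collect();
    let mut next_id = n;
    let mut order = Vec::new();
    while alive.len() > 1 {
        // O(n²) scan for the cheapest pair, repeated n-1 times => O(n³).
        let (mut bi, mut bj, mut best) = (0, 1, u64::MAX);
        for i in 0..alive.len() {
            for j in (i + 1)..alive.len() {
                let c = cost(alive[i], alive[j]);
                if c < best {
                    (bi, bj, best) = (i, j, c);
                }
            }
        }
        order.push((alive[bi], alive[bj]));
        alive.remove(bj); // bj > bi, so remove the later index first
        alive.remove(bi);
        alive.push(next_id); // the contraction result re-enters the pool
        next_id += 1;
    }
    order
}
```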
### 3. Beam Search Planner
Explores multiple contraction paths simultaneously:
```rust
use tenrso_planner::BeamSearchPlanner; // type name illustrative

let planner = BeamSearchPlanner::with_beam_width(5); // width value illustrative
let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**When to use:** Medium networks (8-20 tensors), better quality than greedy
**Complexity:** O(n³ × k) where k is the beam width
**Quality:** Better (often within 1.5-2x of optimal)
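Conceptually, each planning step extends every surviving partial order by every candidate contraction, then prunes the pool back to the k cheapest. A minimal sketch with hypothetical types and a caller-supplied expansion function:

```rust
/// One beam-search step (illustrative, not the crate's internals).
/// A beam entry is (partial contraction order, accumulated cost).
type Entry = (Vec<(usize, usize)>, u64);

fn beam_step(
    beam: &[Entry],
    k: usize,
    expand: impl Fn(&[(usize, usize)]) -> Vec<((usize, usize), u64)>,
) -> Vec<Entry> {
    let mut next = Vec::new();
    for (path, cost_so_far) in beam {
        // Extend each partial order by every legal next contraction.
        for (pair, step_cost) in expand(path) {
            let mut extended = path.clone();
            extended.push(pair);
            next.push((extended, cost_so_far + step_cost));
        }
    }
    next.sort_by_key(|(_, c)| *c); // cheapest first
    next.truncate(k);              // prune back to beam width k
    next
}
```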
### 4. Dynamic Programming Planner
Finds globally optimal contraction order:
```rust
use tenrso_planner::DpPlanner; // type name illustrative

let planner = DpPlanner::new();
let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**When to use:** Small networks (≤ 20 tensors), need a provably optimal plan
**Complexity:** O(3ⁿ) time, O(2ⁿ) space
**Quality:** Optimal (guarantees minimum FLOPs)
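The O(3ⁿ) bound comes from visiting every subset of tensors together with every two-way split of it. A compact bitmask sketch of that recurrence, with a caller-supplied merge cost (illustrative, not the crate's code):

```rust
/// best[S] = min over proper nonempty L ⊂ S of
///           best[L] + best[S \ L] + merge_cost(L, S \ L).
/// Enumerating all (S, L) pairs is the classic O(3ⁿ) subset walk.
fn optimal_cost(n: usize, merge_cost: impl Fn(u32, u32) -> u64) -> u64 {
    let full: u32 = (1u32 << n) - 1;
    let mut best = vec![u64::MAX; 1usize << n];
    for i in 0..n {
        best[1 << i] = 0; // a lone tensor costs nothing
    }
    for s in 1..=full {
        if s.count_ones() < 2 {
            continue;
        }
        let mut l = (s - 1) & s; // largest proper subset of s
        while l > 0 {
            let r = s & !l;
            if best[l as usize] != u64::MAX && best[r as usize] != u64::MAX {
                let c = best[l as usize] + best[r as usize] + merge_cost(l, r);
                best[s as usize] = best[s as usize].min(c);
            }
            l = (l - 1) & s; // next proper subset
        }
    }
    best[full as usize]
}
```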
### 5. Simulated Annealing Planner
Stochastic search with temperature-based acceptance:
```rust
use tenrso_planner::SimulatedAnnealingPlanner; // type name illustrative

// Parameter names and values are illustrative (e.g. iterations, cooling rate).
let planner = SimulatedAnnealingPlanner::with_params(10_000, 0.95);
let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**When to use:** Large networks (> 20 tensors), quality more important than speed
**Complexity:** O(iterations × n) time
**Quality:** Variable (can escape local minima)
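The temperature-based acceptance is the standard Metropolis rule: improvements are always accepted, and a cost increase Δ is accepted with probability exp(-Δ/T). A sketch, assuming the `rand` crate for brevity (the crate itself integrates SciRS2-Core for RNG):

```rust
use rand::Rng; // assumes the `rand` crate (0.8-style API)

/// Metropolis acceptance: improvements always pass; a cost increase of
/// `delta` passes with probability exp(-delta / temperature).
fn accept(delta: f64, temperature: f64, rng: &mut impl Rng) -> bool {
    delta <= 0.0 || rng.gen::<f64>() < (-delta / temperature).exp()
}
```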
### 6. Genetic Algorithm Planner
Evolutionary search with population-based exploration:
```rust
use tenrso_planner::GeneticPlanner; // type name illustrative

// Use the default configuration
let planner = GeneticPlanner::new();

// Or use presets
let planner = GeneticPlanner::fast();         // quick experimentation
let planner = GeneticPlanner::high_quality(); // production quality

let plan = planner.make_plan(&spec, &shapes, &hints)?;
```
**When to use:** Very large networks (> 20 tensors), complex topologies
**Complexity:** O(generations × population × n³)
**Quality:** High (can find near-optimal solutions for complex problems)
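Here an individual is a candidate contraction order, evolved by mutation and crossover. A minimal mutation operator as a sketch (hypothetical representation, `rand` crate assumed); a real planner also needs crossover and a repair pass to keep orders valid:

```rust
use rand::Rng; // assumes the `rand` crate (0.8-style API)

/// Swap two steps of a candidate contraction order. Reordering steps can
/// break dependencies between intermediates, so a real GA planner pairs
/// this with a repair pass; this is only a sketch.
fn mutate(order: &mut [(usize, usize)], rng: &mut impl Rng) {
    if order.len() >= 2 {
        let a = rng.gen_range(0..order.len());
        let b = rng.gen_range(0..order.len());
        order.swap(a, b);
    }
}
```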
## Planner Comparison
| Algorithm | Time | Space | Quality | Best For |
|---|---|---|---|---|
| Greedy | O(n³) | O(n) | Good | General purpose, fast |
| Beam Search | O(n³k) | O(kn) | Better | Medium networks |
| DP | O(3ⁿ) | O(2ⁿ) | Optimal | n ≤ 20 |
| Simulated Annealing | O(iter·n) | O(n) | Variable | Large, quality focus |
| Genetic Algorithm | O(gen·pop·n³) | O(pop·n) | High | Very large, complex |
| Adaptive | Varies | Varies | Auto | All cases ⭐ |
## Examples
See the `examples/` directory for detailed usage:

- `basic_matmul.rs` - Simple matrix multiplication
- `matrix_chain.rs` - Greedy vs DP comparison
- `planning_hints.rs` - Advanced hint usage
- `genetic_algorithm.rs` - GA planner showcase
Run examples:
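```bash
cargo run --example basic_matmul
cargo run --example genetic_algorithm
```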
## Benchmarks
Comprehensive benchmark suite covering all planners:
```bash
# Run all benchmarks
cargo bench

# Run a specific benchmark (substitute the benchmark name)
cargo bench --bench <name>
```
**Benchmark Results (typical):**
- Greedy: ~1ms for 10 tensors
- Beam Search (k=5): ~5ms for 10 tensors
- DP: ~1ms for 5 tensors, ~100ms for 10 tensors
- SA: ~10-100ms (configurable)
- GA: ~50-500ms (depends on population/generations)
## Advanced Features
### Plan Comparison and Analysis
```rust
// Method names below are illustrative; see the crate docs.
use tenrso_planner::{dp_planner, greedy_planner};

let greedy = greedy_planner(&spec, &shapes, &hints)?;
let optimal = dp_planner(&spec, &shapes, &hints)?;

// Compare plans
let comparison = greedy.compare(&optimal);
println!("{:?}", comparison);

// Analyze a plan
let analysis = greedy.analyze();
println!("{:?}", analysis);
```
### Plan Refinement (Local Search)
```rust
use tenrso_planner::refine_plan;

// Signature is illustrative; refinement applies local search to a plan.
let plan = greedy_planner(&spec, &shapes, &hints)?;
let refined = refine_plan(&plan, &spec, &shapes)?;
```
### Serialization (with the `serde` feature)
```rust
use tenrso_planner::Plan;

let plan = greedy_planner(&spec, &shapes, &hints)?;

// Serialize to JSON
let json = serde_json::to_string(&plan)?;

// Deserialize
let loaded: Plan = serde_json::from_str(&json)?;
```
## Performance
Planning overhead is typically negligible compared to execution:
| Network Size | Greedy | Beam (k=5) | DP | SA | GA |
|---|---|---|---|---|---|
| 3 tensors | 100μs | 500μs | 1ms | 10ms | 50ms |
| 5 tensors | 200μs | 1ms | 10ms | 20ms | 100ms |
| 10 tensors | 1ms | 5ms | 1s | 50ms | 200ms |
| 20 tensors | 10ms | 50ms | N/A | 100ms | 500ms |
## Testing
```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run a specific test (substitute the test name)
cargo test <test_name>
```
**Test Coverage:** 144 tests (136 unit + 8 doc), 100% passing
## Documentation
Build and view documentation:
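```bash
cargo doc --open
```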
## Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.
## References
- Matrix chain multiplication: Cormen et al., *Introduction to Algorithms*
- Tensor network contraction: Pfeifer et al., *Faster identification of optimal contraction sequences* (2014)
- Einsum optimization: the `opt_einsum` library (Python)
- Genetic algorithms: Goldberg, *Genetic Algorithms in Search, Optimization, and Machine Learning* (1989)
## License
Licensed under the Apache License, Version 2.0. See LICENSE for details.
## Related Projects
- `tenrso-core` - Core tensor data structures
- `tenrso-kernels` - High-performance tensor kernels
- `tenrso-exec` - Plan execution engine
- `tenrso` - Main TenRSo library
**Status:** ✅ Production-ready • **Version:** 0.1.0-alpha.1 • **Tests:** 144/144 passing