Automatic parallelization for computation graphs.
This module provides automatic detection and exploitation of parallelism opportunities:
- Dependency analysis: Build dependency graphs and detect parallelizable operations
- Cost modeling: Estimate execution costs and communication overhead
- Work partitioning: Dynamically partition work across threads/devices
- Load balancing: Balance work to minimize idle time
- Pipeline detection: Identify pipeline parallelism opportunities
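As an illustration of the dependency-analysis and load-balancing bullets above, here is a minimal, self-contained sketch (not this crate's implementation; the function names, node IDs, and edge-list representation are hypothetical) that groups DAG nodes into concurrently executable stages via topological leveling, then spreads one stage's nodes across workers with a greedy longest-processing-time heuristic:

```rust
use std::collections::HashMap;

/// Group DAG nodes into stages: every node in a stage depends only on
/// nodes in earlier stages, so nodes within a stage can run concurrently.
fn parallel_stages(num_nodes: usize, edges: &[(usize, usize)]) -> Vec<Vec<usize>> {
    let mut indegree = vec![0usize; num_nodes];
    let mut succ: HashMap<usize, Vec<usize>> = HashMap::new();
    for &(from, to) in edges {
        indegree[to] += 1;
        succ.entry(from).or_default().push(to);
    }
    // Kahn's algorithm, but emitting each zero-indegree frontier as one stage.
    let mut frontier: Vec<usize> = (0..num_nodes).filter(|&n| indegree[n] == 0).collect();
    let mut stages = Vec::new();
    while !frontier.is_empty() {
        let mut next = Vec::new();
        for &n in &frontier {
            for &m in succ.get(&n).map(Vec::as_slice).unwrap_or(&[]) {
                indegree[m] -= 1;
                if indegree[m] == 0 {
                    next.push(m);
                }
            }
        }
        stages.push(frontier);
        frontier = next;
    }
    stages
}

/// Greedy LPT load balancing: assign the costliest node to the
/// currently least-loaded worker, minimizing idle time.
fn partition(costs: &[(usize, u64)], workers: usize) -> Vec<Vec<usize>> {
    let mut sorted: Vec<_> = costs.to_vec();
    sorted.sort_by(|a, b| b.1.cmp(&a.1)); // descending by estimated cost
    let mut loads = vec![0u64; workers];
    let mut parts: Vec<Vec<usize>> = vec![Vec::new(); workers];
    for (node, cost) in sorted {
        let w = (0..workers).min_by_key(|&w| loads[w]).unwrap();
        loads[w] += cost;
        parts[w].push(node);
    }
    parts
}

fn main() {
    // Diamond graph 0 -> {1, 2} -> 3: nodes 1 and 2 form a parallel stage.
    let stages = parallel_stages(4, &[(0, 1), (0, 2), (1, 3), (2, 3)]);
    println!("{stages:?}"); // [[0], [1, 2], [3]]
    // Balance three nodes (id, cost) across two workers.
    let parts = partition(&[(1, 30), (2, 20), (4, 50)], 2);
    println!("{parts:?}"); // [[4], [1, 2]]
}
```

A real cost model would replace the static `u64` costs with profiled or estimated execution times, and the stage boundaries would additionally account for communication overhead between workers.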
§Example
```rust
use tensorlogic_infer::{AutoParallelizer, ParallelizationStrategy, CostModel};

// Create auto-parallelizer with a cost model
let parallelizer = AutoParallelizer::new()
    .with_strategy(ParallelizationStrategy::Aggressive)
    .with_cost_model(CostModel::ProfileBased);

// Analyze the graph for parallelism
let analysis = parallelizer.analyze(&graph)?;
println!("Found {} parallelizable stages", analysis.num_stages);

// Generate a parallel execution plan
let plan = parallelizer.generate_plan(&graph)?;
```

Structs§
- AutoParallelizer - Automatic parallelizer.
- NodeInfo - Node information for parallelization analysis.
- ParallelExecutionPlan - Parallel execution plan.
- ParallelStage - Parallel stage containing nodes that can execute concurrently.
- ParallelizationAnalysis - Parallelization analysis results.
- WorkPartition - Work partition for a single worker.
Enums§
- AutoParallelError - Auto-parallelization errors.
- CostModel - Cost model type.
- DependencyType - Dependency type between nodes.
- ParallelizationStrategy - Parallelization strategy.
Type Aliases§
- NodeId - Node ID in the computation graph.