Module auto_parallel

Automatic parallelization for computation graphs.

This module provides automatic detection and exploitation of parallelism opportunities (illustrative sketches of the underlying ideas follow the list and the example below):

  • Dependency analysis: Build dependency graphs and detect parallelizable operations
  • Cost modeling: Estimate execution costs and communication overhead
  • Work partitioning: Dynamically partition work across threads/devices
  • Load balancing: Balance work to minimize idle time
  • Pipeline detection: Identify pipeline parallelism opportunities
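
The dependency-analysis and stage-detection steps boil down to a classic DAG levelling problem: a node can start only after all of its dependencies, and nodes that land on the same level have no dependency path between them, so they may run concurrently. Below is a minimal, standard-library-only sketch of that idea; the `deps` map and the `stage_of` function are illustrative and are not part of this module's API.

use std::collections::HashMap;

/// Illustrative only: assign each node the smallest stage index that is
/// strictly greater than the stages of all of its dependencies. Nodes that
/// share a stage have no dependency between them and can run in parallel.
fn stage_of(
    node: usize,
    deps: &HashMap<usize, Vec<usize>>,
    memo: &mut HashMap<usize, usize>,
) -> usize {
    if let Some(&s) = memo.get(&node) {
        return s;
    }
    let mut s = 0;
    if let Some(ds) = deps.get(&node) {
        for &d in ds {
            s = s.max(stage_of(d, deps, memo) + 1);
        }
    }
    memo.insert(node, s);
    s
}

fn main() {
    // Diamond-shaped graph: 0 feeds 1 and 2, which both feed 3.
    let deps: HashMap<usize, Vec<usize>> = HashMap::from([
        (0, vec![]),
        (1, vec![0]),
        (2, vec![0]),
        (3, vec![1, 2]),
    ]);
    let mut memo = HashMap::new();
    for &node in deps.keys() {
        stage_of(node, &deps, &mut memo);
    }
    // Nodes 1 and 2 land on the same stage and can execute concurrently.
    assert_eq!(memo[&1], memo[&2]);
    assert!(memo[&3] > memo[&1]);
}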

§Example

use tensorlogic_infer::{AutoParallelizer, ParallelizationStrategy, CostModel};

// Create auto-parallelizer with cost model
let parallelizer = AutoParallelizer::new()
    .with_strategy(ParallelizationStrategy::Aggressive)
    .with_cost_model(CostModel::ProfileBased);

// Analyze graph for parallelism
let analysis = parallelizer.analyze(&graph)?;
println!("Found {} parallelizable stages", analysis.num_stages);

// Generate parallel execution plan
let plan = parallelizer.generate_plan(&graph)?;
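
Once stages are known, work partitioning and load balancing amount to spreading the nodes of a stage across workers so that no worker sits idle while another is still busy. A common greedy heuristic is longest-processing-time-first: sort nodes by estimated cost and always hand the next node to the least-loaded worker. The sketch below assumes per-node cost estimates are available; the `partition_nodes` function and its signature are illustrative, not this module's API.

/// Illustrative only: greedy longest-processing-time (LPT) assignment of
/// (node id, estimated cost) pairs to `workers` partitions.
fn partition_nodes(costs: &[(usize, f64)], workers: usize) -> Vec<Vec<usize>> {
    let mut sorted = costs.to_vec();
    // Most expensive nodes first, so cheap nodes can fill gaps later.
    sorted.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

    let mut partitions = vec![Vec::new(); workers];
    let mut load = vec![0.0f64; workers];
    for (node, cost) in sorted {
        // Always give the next node to the currently least-loaded worker.
        let w = (0..workers)
            .min_by(|&a, &b| load[a].partial_cmp(&load[b]).unwrap())
            .unwrap();
        partitions[w].push(node);
        load[w] += cost;
    }
    partitions
}

fn main() {
    // (node id, estimated cost) pairs for one parallel stage.
    let stage_costs = [(0, 4.0), (1, 3.0), (2, 2.0), (3, 2.0), (4, 1.0)];
    println!("partitions: {:?}", partition_nodes(&stage_costs, 2));
}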

Structs§

  • AutoParallelizer: Automatic parallelizer.
  • NodeInfo: Node information for parallelization analysis.
  • ParallelExecutionPlan: Parallel execution plan.
  • ParallelStage: Parallel stage containing nodes that can execute concurrently.
  • ParallelizationAnalysis: Parallelization analysis results.
  • WorkPartition: Work partition for a single worker.
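
The structs above fit together roughly as follows: an analysis groups nodes into stages, a plan orders those stages, and each stage's work is split into per-worker partitions. The definitions below are a hypothetical sketch for orientation only; the actual fields of these types may differ.

// Hypothetical shapes, for orientation only; the real definitions may differ.
type NodeId = usize;

struct ParallelStage {
    /// Nodes with no dependencies among each other; safe to run concurrently.
    nodes: Vec<NodeId>,
}

struct WorkPartition {
    /// Thread or device index this partition is assigned to.
    worker: usize,
    /// Nodes assigned to this worker.
    nodes: Vec<NodeId>,
    /// Cost-model estimate of this partition's execution time.
    estimated_cost: f64,
}

struct ParallelExecutionPlan {
    /// Stages executed in order; nodes within a stage run in parallel.
    stages: Vec<ParallelStage>,
    /// How each stage's nodes are split across workers.
    partitions: Vec<WorkPartition>,
}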

Enums§

  • AutoParallelError: Auto-parallelization errors.
  • CostModel: Cost model type.
  • DependencyType: Dependency type between nodes.
  • ParallelizationStrategy: Parallelization strategy.

Type Aliases§

  • NodeId: Node ID in the computation graph.