Crate omeco

§omeco - Tensor Network Contraction Order Optimization

A Rust library for optimizing tensor network contraction orders, ported from the Julia package OMEinsumContractionOrders.jl.

§What is a Tensor Network?

A tensor network represents multilinear transformations as hypergraphs. Arrays (tensors) are nodes, and shared indices are hyperedges connecting them. Contracting a tensor network means evaluating the transformation by performing a sequence of pairwise tensor operations.

The computational cost—both time and memory—depends critically on the order of these operations. A specific ordering is called a contraction order, and finding an efficient one is contraction order optimization.
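
A concrete, purely illustrative example: for the matrix chain A[i,j] × B[j,k] × C[k,l] with |i| = |k| = 1000 and |j| = |l| = 2 (sizes chosen only for contrast), the two possible orders differ by roughly 500× in FLOP count:

// Back-of-the-envelope FLOP counts; sizes are illustrative only.
let (i, j, k, l) = (1000u64, 2, 1000, 2);
// Order ((A × B) × C): builds a large i×k intermediate first.
let left_first = i * j * k + i * k * l;   // 2_000_000 + 2_000_000
// Order (A × (B × C)): builds a tiny j×l intermediate first.
let right_first = j * k * l + i * j * l;  // 4_000 + 4_000
assert!(right_first * 500 == left_first);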

This framework appears across many domains: einsum notation in numerical computing, factor graphs in probabilistic inference, and junction trees in graphical models. Applications include quantum circuit simulation, quantum error correction, neural network compression, and combinatorial optimization.

Finding the optimal contraction order is NP-complete, but good heuristics can find near-optimal solutions quickly.

§Features

This crate provides two main features:

  1. Contraction Order Optimization — Find efficient orderings that minimize time and/or space complexity
  2. Slicing — Trade time for space by looping over selected indices

§Feature 1: Contraction Order Optimization

A contraction order is represented as a binary tree where leaves are input tensors and internal nodes are intermediate results. The optimizer searches for trees that minimize a cost function balancing multiple objectives:

  • Time complexity (tc): Total FLOP count (log2 scale)
  • Space complexity (sc): Maximum intermediate tensor size (log2 scale)
  • Read-write complexity (rwc): Total memory I/O (log2 scale)

use omeco::{EinCode, GreedyMethod, contraction_complexity, optimize_code, uniform_size_dict};

// Matrix chain: A[i,j] × B[j,k] × C[k,l] → D[i,l]
let code = EinCode::new(
    vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
    vec!['i', 'l'],
);
let sizes = uniform_size_dict(&code, 16);

let optimized = optimize_code(&code, &sizes, &GreedyMethod::default())
    .expect("optimizer failed");
let metrics = contraction_complexity(&optimized, &sizes, &code.ixs);
println!("time: 2^{:.2}", metrics.tc);
println!("space: 2^{:.2}", metrics.sc);

Available optimizers:

Optimizer      Description
GreedyMethod   Fast O(n² log n) greedy heuristic
TreeSA         Simulated annealing for higher-quality solutions

Use GreedyMethod when you need speed; use TreeSA when contraction cost dominates and you can afford extra search time.
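
TreeSA plugs into the same optimize_code call. The sketch below assumes TreeSA::default() provides reasonable settings; see the TreeSA parameters under Algorithm Details for tuning:

use omeco::{EinCode, TreeSA, contraction_complexity, optimize_code, uniform_size_dict};

let code = EinCode::new(
    vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
    vec!['i', 'l'],
);
let sizes = uniform_size_dict(&code, 16);

// Default TreeSA settings are assumed here; tune betas/ntrials/niters for harder instances.
let optimized = optimize_code(&code, &sizes, &TreeSA::default())
    .expect("optimizer failed");
let metrics = contraction_complexity(&optimized, &sizes, &code.ixs);
println!("time: 2^{:.2}", metrics.tc);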

§Feature 2: Slicing

Slicing trades time complexity for reduced space complexity by explicitly looping over a subset of tensor indices. This is useful when the optimal contraction order still exceeds available memory.

For example, slicing index j with dimension 64 means running 64 smaller contractions and summing the results, reducing peak memory at the cost of more total work.

use omeco::{EinCode, GreedyMethod, SlicedEinsum, optimize_code, sliced_complexity, uniform_size_dict};

let code = EinCode::new(
    vec![vec!['i', 'j'], vec!['j', 'k']],
    vec!['i', 'k'],
);
let sizes = uniform_size_dict(&code, 64);

let optimized = optimize_code(&code, &sizes, &GreedyMethod::default())
    .expect("optimizer failed");

// Slice over index 'j' to reduce memory
let sliced = SlicedEinsum::new(vec!['j'], optimized);
let metrics = sliced_complexity(&sliced, &sizes, &code.ixs);
println!("sliced space: 2^{:.2}", metrics.sc);

§Algorithm Details

§GreedyMethod

Repeatedly contracts the tensor pair with the lowest cost:

loss = size(output) - α × (size(input1) + size(input2))

  • alpha (0.0–1.0): Balances output size vs input size reduction
  • temperature: Enables stochastic selection via Boltzmann sampling (0 = deterministic)
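
The loss can be sketched as a plain function (illustrative only, not the crate's internal implementation):

// Lower loss means the pair is contracted earlier.
fn greedy_loss(output_size: f64, input1_size: f64, input2_size: f64, alpha: f64) -> f64 {
    output_size - alpha * (input1_size + input2_size)
}

// With alpha = 1.0, a pair that shrinks the total tensor size is strongly preferred.
assert!(greedy_loss(16.0, 64.0, 64.0, 1.0) < greedy_loss(16.0, 64.0, 64.0, 0.0));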

§TreeSA

Simulated annealing on contraction trees. Starting from an initial tree, it applies local rewrites and accepts or rejects each one via the Metropolis criterion. Multiple trials run in parallel using rayon.
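
The acceptance rule, in sketch form (illustrative only, not the crate's internal code; u is a uniform random sample in [0, 1)):

// Accept a rewrite if it improves the score, or with Boltzmann probability otherwise.
fn accept(beta: f64, old_score: f64, new_score: f64, u: f64) -> bool {
    new_score <= old_score || u < (-beta * (new_score - old_score)).exp()
}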

The scoring function balances objectives:

score = w_t × 2^tc + w_rw × 2^rwc + w_s × max(0, 2^sc - 2^sc_target)

  • betas: Inverse temperature schedule
  • ntrials: Parallel trials (control threads via RAYON_NUM_THREADS)
  • niters: Iterations per temperature level
  • score: ScoreFunction with weights and space target
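
The scoring formula maps directly onto a small function (a sketch only; the crate's ScoreFunction may differ in shape):

// tc, rwc, sc and sc_target are on the log2 scale used throughout this crate.
fn score(tc: f64, rwc: f64, sc: f64, sc_target: f64, w_t: f64, w_rw: f64, w_s: f64) -> f64 {
    w_t * tc.exp2() + w_rw * rwc.exp2() + w_s * (sc.exp2() - sc_target.exp2()).max(0.0)
}

// No space penalty while sc stays at or below the target.
assert_eq!(score(10.0, 8.0, 20.0, 20.0, 1.0, 0.0, 1.0), 1024.0);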

§Re-exports

pub use complexity::eincode_complexity;
pub use complexity::flop;
pub use complexity::nested_complexity;
pub use complexity::nested_flop;
pub use complexity::peak_memory;
pub use complexity::sliced_complexity;
pub use complexity::ContractionComplexity;
pub use eincode::log2_size_dict;
pub use eincode::uniform_size_dict;
pub use eincode::EinCode;
pub use eincode::NestedEinsum;
pub use eincode::SlicedEinsum;
pub use greedy::optimize_greedy;
pub use greedy::ContractionTree;
pub use greedy::GreedyMethod;
pub use greedy::GreedyResult;
pub use label::Label;
pub use score::ScoreFunction;
pub use slicer::slice_code;
pub use slicer::CodeSlicer;
pub use slicer::Slicer;
pub use slicer::TreeSASlicer;
pub use treesa::optimize_treesa;
pub use treesa::Initializer;
pub use treesa::TreeSA;

§Modules

complexity       Complexity calculations for einsum contractions.
eincode          Core data structures for einsum expressions and contraction trees.
expr_tree        Expression tree for simulated annealing optimization.
greedy           Greedy contraction order optimizer.
incidence_list   Hypergraph representation for the greedy contraction algorithm.
label            Label trait for tensor index labels.
score            Scoring function for contraction order optimization.
slicer           Slicing optimization for tensor network contraction.
treesa           TreeSA: Simulated Annealing optimizer for contraction order.
utils            Utility functions for numerical computations.

§Traits

CodeOptimizer    Trait for contraction order optimizers.

§Functions

contraction_complexity   Compute the contraction complexity of an optimized NestedEinsum.
optimize_code            Optimize the contraction order for an EinCode using the specified optimizer.