//! # omeco - Tensor Network Contraction Order Optimization
//!
//! A Rust library for optimizing tensor network contraction orders, ported from
//! the Julia package [OMEinsumContractionOrders.jl](https://github.com/TensorBFS/OMEinsumContractionOrders.jl).
//!
//! ## What is a Tensor Network?
//!
//! A *tensor network* represents multilinear transformations as hypergraphs.
//! Arrays (tensors) are nodes, and shared indices are hyperedges connecting them.
//! To *contract* a tensor network means evaluating the transformation by performing
//! a sequence of pairwise tensor operations.
//!
//! The computational cost—both time and memory—depends critically on the order
//! of these operations. A specific ordering is called a *contraction order*, and
//! finding an efficient one is *contraction order optimization*.
//!
//! This framework appears across many domains: *einsum* notation in numerical
//! computing, *factor graphs* in probabilistic inference, and *junction trees*
//! in graphical models. Applications include quantum circuit simulation,
//! quantum error correction, neural network compression, and combinatorial optimization.
//!
//! Finding the optimal contraction order is NP-complete, but good heuristics
//! can find near-optimal solutions quickly.
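//!
//! For intuition, consider a chain of three matrix products: multiplying in a
//! different order changes the FLOP count by orders of magnitude. The sketch
//! below is plain Rust, independent of this crate (`matmul_flops` is a
//! hypothetical helper, counting one multiply-add per output element per
//! summed index):
//!
//! ```rust
//! /// FLOPs for multiplying an m×k matrix by a k×n matrix.
//! fn matmul_flops(m: usize, k: usize, n: usize) -> usize {
//!     m * k * n
//! }
//!
//! // A is 10×1000, B is 1000×10, C is 10×1000.
//! // (A·B)·C produces a small 10×10 intermediate:
//! let left_first = matmul_flops(10, 1000, 10) + matmul_flops(10, 10, 1000);
//! // A·(B·C) produces a large 1000×1000 intermediate:
//! let right_first = matmul_flops(1000, 10, 1000) + matmul_flops(10, 1000, 1000);
//! assert_eq!(left_first, 200_000);
//! assert_eq!(right_first, 20_000_000); // 100× more work for the same result
//! ```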
//!
//! ## Features
//!
//! This crate provides two main features:
//!
//! 1. **Contraction Order Optimization** — Find efficient orderings that minimize
//!    time and/or space complexity
//! 2. **Slicing** — Trade time for space by looping over selected indices
//!
//! ### Feature 1: Contraction Order Optimization
//!
//! A contraction order is represented as a binary tree where leaves are input
//! tensors and internal nodes are intermediate results. The optimizer searches
//! for trees that minimize a cost function balancing multiple objectives:
//!
//! - **Time complexity (tc)**: Total FLOP count (log2 scale)
//! - **Space complexity (sc)**: Maximum intermediate tensor size (log2 scale)
//! - **Read-write complexity (rwc)**: Total memory I/O (log2 scale)
//!
//! ```rust
//! use omeco::{EinCode, GreedyMethod, contraction_complexity, optimize_code, uniform_size_dict};
//!
//! // Matrix chain: A[i,j] × B[j,k] × C[k,l] → D[i,l]
//! let code = EinCode::new(
//!     vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
//!     vec!['i', 'l'],
//! );
//! let sizes = uniform_size_dict(&code, 16);
//!
//! let optimized = optimize_code(&code, &sizes, &GreedyMethod::default())
//!     .expect("optimizer failed");
//! let metrics = contraction_complexity(&optimized, &sizes, &code.ixs);
//! println!("time: 2^{:.2}", metrics.tc);
//! println!("space: 2^{:.2}", metrics.sc);
//! ```
//!
//! **Available optimizers:**
//!
//! | Optimizer | Description |
//! |-----------|-------------|
//! | [`GreedyMethod`] | Fast O(n² log n) greedy heuristic |
//! | [`TreeSA`] | Simulated annealing for higher-quality solutions |
//!
//! Use [`GreedyMethod`] when you need speed; use [`TreeSA`] when contraction
//! cost dominates and you can afford extra search time.
//!
//! ### Feature 2: Slicing
//!
//! *Slicing* trades time complexity for reduced space complexity by explicitly
//! looping over a subset of tensor indices. This is useful when the optimal
//! contraction order still exceeds available memory.
//!
//! For example, slicing index `j` with dimension 64 means running 64 smaller
//! contractions and summing the results, reducing peak memory at the cost of
//! more total work.
//!
//! ```rust
//! use omeco::{EinCode, GreedyMethod, SlicedEinsum, optimize_code, sliced_complexity, uniform_size_dict};
//!
//! let code = EinCode::new(
//!     vec![vec!['i', 'j'], vec!['j', 'k']],
//!     vec!['i', 'k'],
//! );
//! let sizes = uniform_size_dict(&code, 64);
//!
//! let optimized = optimize_code(&code, &sizes, &GreedyMethod::default())
//!     .expect("optimizer failed");
//!
//! // Slice over index 'j' to reduce memory
//! let sliced = SlicedEinsum::new(vec!['j'], optimized);
//! let metrics = sliced_complexity(&sliced, &sizes, &code.ixs);
//! println!("sliced space: 2^{:.2}", metrics.sc);
//! ```
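//!
//! Slicing preserves the result because a contracted index is a sum, and the
//! sum can be split into one term per index value. A minimal standalone sketch
//! (plain Rust, independent of this crate) for a dot product over `j`:
//!
//! ```rust
//! let a = [1.0_f64, 2.0, 3.0]; // A[j]
//! let b = [4.0_f64, 5.0, 6.0]; // B[j]
//!
//! // Full contraction: sum over j in one pass.
//! let full: f64 = a.iter().zip(&b).map(|(x, y)| x * y).sum();
//!
//! // Sliced: one smaller contraction per value of j, partial results summed.
//! let sliced: f64 = (0..a.len()).map(|j| a[j] * b[j]).sum();
//!
//! assert_eq!(full, sliced);
//! assert_eq!(full, 32.0); // 1·4 + 2·5 + 3·6
//! ```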
//!
//! ## Algorithm Details
//!
//! ### GreedyMethod
//!
//! Repeatedly contracts the tensor pair with the lowest loss:
//!
//! ```text
//! loss = size(output) - α × (size(input1) + size(input2))
//! ```
//!
//! - `alpha` (0.0–1.0): Balances output size against input size reduction
//! - `temperature`: Enables stochastic selection via Boltzmann sampling (0 = deterministic)
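//!
//! As a standalone sketch of this loss (plain Rust, independent of the crate's
//! internals; `greedy_loss` is a hypothetical helper, with sizes taken as raw
//! element counts):
//!
//! ```rust
//! fn greedy_loss(out_size: f64, in1_size: f64, in2_size: f64, alpha: f64) -> f64 {
//!     out_size - alpha * (in1_size + in2_size)
//! }
//!
//! // Contracting A[i,j] with B[j,k], all dimensions 16: the inputs and the
//! // output are each 16×16 = 256 elements.
//! assert_eq!(greedy_loss(256.0, 256.0, 256.0, 0.0), 256.0); // α = 0: output size only
//! assert_eq!(greedy_loss(256.0, 256.0, 256.0, 1.0), -256.0); // α = 1: rewards freeing inputs
//! ```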
//!
//! ### TreeSA
//!
//! Simulated annealing on contraction trees. Starts from an initial tree,
//! applies local rewrites, and accepts or rejects each move via the Metropolis
//! criterion. Runs multiple trials in parallel using rayon.
//!
//! The scoring function balances the objectives:
//!
//! ```text
//! score = w_t × 2^tc + w_rw × 2^rwc + w_s × max(0, 2^sc - 2^sc_target)
//! ```
//!
//! - `betas`: Inverse temperature schedule
//! - `ntrials`: Parallel trials (control threads via `RAYON_NUM_THREADS`)
//! - `niters`: Iterations per temperature level
//! - `score`: [`ScoreFunction`] with weights and space target
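//!
//! As a standalone sketch of this score (plain Rust, independent of the crate's
//! internals; `score` is a hypothetical helper, and note that the memory
//! penalty only activates above the space target):
//!
//! ```rust
//! fn score(tc: f64, sc: f64, rwc: f64, w_t: f64, w_s: f64, w_rw: f64, sc_target: f64) -> f64 {
//!     w_t * tc.exp2() + w_rw * rwc.exp2() + w_s * (sc.exp2() - sc_target.exp2()).max(0.0)
//! }
//!
//! // Within the space budget (sc = 8 <= sc_target = 20), only the time term
//! // contributes here (w_rw = 0):
//! assert_eq!(score(10.0, 8.0, 9.0, 1.0, 1.0, 0.0, 20.0), 1024.0);
//! // Over budget (sc = 21): the penalty term 2^21 - 2^20 kicks in.
//! assert_eq!(score(10.0, 21.0, 9.0, 1.0, 1.0, 0.0, 20.0), 1024.0 + 1_048_576.0);
//! ```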

pub mod complexity;
pub mod eincode;
pub mod expr_tree;
pub mod greedy;
pub mod incidence_list;
pub mod json;
pub mod label;
pub mod score;
pub mod simplifier;
pub mod slicer;
pub mod treesa;
pub mod utils;

#[cfg(test)]
pub mod test_utils;

// Re-export main types
pub use complexity::{
    eincode_complexity, flop, nested_complexity, nested_flop, peak_memory, sliced_complexity,
    ContractionComplexity,
};
pub use eincode::{log2_size_dict, uniform_size_dict, EinCode, NestedEinsum, SlicedEinsum};
pub use greedy::{optimize_greedy, ContractionTree, GreedyMethod, GreedyResult};
pub use label::Label;
pub use score::ScoreFunction;
pub use slicer::{slice_code, CodeSlicer, Slicer, TreeSASlicer};
pub use treesa::{optimize_treesa, Initializer, TreeSA};

use std::collections::HashMap;

/// Trait for contraction order optimizers.
pub trait CodeOptimizer {
    /// Optimize the contraction order for an EinCode.
    fn optimize<L: Label>(
        &self,
        code: &EinCode<L>,
        size_dict: &HashMap<L, usize>,
    ) -> Option<NestedEinsum<L>>;
}

impl CodeOptimizer for GreedyMethod {
    fn optimize<L: Label>(
        &self,
        code: &EinCode<L>,
        size_dict: &HashMap<L, usize>,
    ) -> Option<NestedEinsum<L>> {
        optimize_greedy(code, size_dict, self)
    }
}

impl CodeOptimizer for TreeSA {
    fn optimize<L: Label>(
        &self,
        code: &EinCode<L>,
        size_dict: &HashMap<L, usize>,
    ) -> Option<NestedEinsum<L>> {
        optimize_treesa(code, size_dict, self)
    }
}

/// Optimize the contraction order for an EinCode using the specified optimizer.
///
/// # Example
///
/// ```rust
/// use omeco::{EinCode, optimize_code, GreedyMethod};
/// use std::collections::HashMap;
///
/// let code = EinCode::new(
///     vec![vec!['i', 'j'], vec!['j', 'k']],
///     vec!['i', 'k']
/// );
///
/// let mut sizes = HashMap::new();
/// sizes.insert('i', 10);
/// sizes.insert('j', 20);
/// sizes.insert('k', 10);
///
/// let optimized = optimize_code(&code, &sizes, &GreedyMethod::default());
/// assert!(optimized.is_some());
/// ```
pub fn optimize_code<L: Label, O: CodeOptimizer>(
    code: &EinCode<L>,
    size_dict: &HashMap<L, usize>,
    optimizer: &O,
) -> Option<NestedEinsum<L>> {
    optimizer.optimize(code, size_dict)
}

/// Compute the contraction complexity of an optimized NestedEinsum.
///
/// This is a convenience function that wraps [`nested_complexity`].
pub fn contraction_complexity<L: Label>(
    code: &NestedEinsum<L>,
    size_dict: &HashMap<L, usize>,
    original_ixs: &[Vec<L>],
) -> ContractionComplexity {
    nested_complexity(code, size_dict, original_ixs)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_optimize_code_greedy() {
        let code = EinCode::new(
            vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
            vec!['i', 'l'],
        );

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 8);
        sizes.insert('k', 8);
        sizes.insert('l', 4);

        let result = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(result.is_some());

        let nested = result.unwrap();
        assert!(nested.is_binary());
        assert_eq!(nested.leaf_count(), 3);
    }

    #[test]
    fn test_optimize_code_treesa() {
        let code = EinCode::new(vec![vec!['i', 'j'], vec!['j', 'k']], vec!['i', 'k']);

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 8);
        sizes.insert('k', 4);

        let result = optimize_code(&code, &sizes, &TreeSA::fast());
        assert!(result.is_some());

        let nested = result.unwrap();
        assert!(nested.is_binary());
    }

    #[test]
    fn test_contraction_complexity_wrapper() {
        let code = EinCode::new(vec![vec!['i', 'j'], vec!['j', 'k']], vec!['i', 'k']);

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 8);
        sizes.insert('k', 4);

        let result = optimize_code(&code, &sizes, &GreedyMethod::default()).unwrap();
        let complexity = contraction_complexity(&result, &sizes, &code.ixs);

        assert!(complexity.tc > 0.0);
        assert!(complexity.sc > 0.0);
    }

    #[test]
    fn test_single_tensor() {
        let code = EinCode::new(vec![vec!['i', 'j']], vec!['i', 'j']);

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 4);

        let result = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(result.is_some());
        assert!(result.unwrap().is_leaf());
    }

    #[test]
    fn test_trace() {
        let code = EinCode::new(
            vec![vec!['i', 'j'], vec!['j', 'i']],
            vec![], // Trace - no output
        );

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 4);

        let result = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(result.is_some());
    }

    #[test]
    fn test_empty_code() {
        let code: EinCode<char> = EinCode::new(vec![], vec![]);
        let sizes: HashMap<char, usize> = HashMap::new();

        let result = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(result.is_none());
    }

    #[test]
    fn test_optimize_code_with_slicing() {
        use crate::slicer::{slice_code, TreeSASlicer};

        let code = EinCode::new(
            vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
            vec!['i', 'l'],
        );

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 8);
        sizes.insert('k', 8);
        sizes.insert('l', 4);

        // First optimize
        let nested = optimize_code(&code, &sizes, &GreedyMethod::default()).unwrap();

        // Then slice
        let slicer = TreeSASlicer::fast();
        let sliced = slice_code(&nested, &sizes, &slicer, &code.ixs);

        // Verify sliced result exists (may or may not slice depending on sizes)
        assert!(sliced.is_some());
    }

    #[test]
    fn test_contraction_complexity_deep_tree() {
        let code = EinCode::new(
            vec![
                vec!['a', 'b'],
                vec!['b', 'c'],
                vec!['c', 'd'],
                vec!['d', 'e'],
            ],
            vec!['a', 'e'],
        );

        let mut sizes = HashMap::new();
        sizes.insert('a', 2);
        sizes.insert('b', 2);
        sizes.insert('c', 2);
        sizes.insert('d', 2);
        sizes.insert('e', 2);

        let nested = optimize_code(&code, &sizes, &GreedyMethod::default()).unwrap();
        let complexity = contraction_complexity(&nested, &sizes, &code.ixs);

        // Deep tree should have multiple contractions
        assert!(complexity.tc > 0.0);
        assert!(complexity.sc > 0.0);
        assert!(complexity.rwc > 0.0);
    }

    #[test]
    fn test_optimize_code_treesa_with_path_decomp() {
        let code = EinCode::new(
            vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
            vec!['i', 'l'],
        );

        let mut sizes = HashMap::new();
        sizes.insert('i', 4);
        sizes.insert('j', 8);
        sizes.insert('k', 8);
        sizes.insert('l', 4);

        let result = optimize_code(&code, &sizes, &TreeSA::path());
        assert!(result.is_some());

        let nested = result.unwrap();
        // Path decomposition should produce a valid tree
        assert!(nested.is_binary() || nested.is_leaf());
    }
}

#[cfg(test)]
mod issue_13_tests {
    use super::*;

    #[test]
    fn test_issue_13_greedy_scalar_output() {
        // Issue #13: optimize_code ignores final output indices (iy) in NestedEinsum
        // A[i,j] @ B[j,k] -> scalar (empty iy)
        let code = EinCode::new(vec![vec![0usize, 1], vec![1usize, 2]], vec![]);

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2)].into();
        let optimizer = GreedyMethod::default();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "Root node's iy should match requested output. Got {:?}, expected {:?}",
                    eins.iy, code.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("Multi-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_greedy_partial_output() {
        // A[i,j] @ B[j,k] -> [i] (only keep first index)
        let code = EinCode::new(vec![vec![0usize, 1], vec![1usize, 2]], vec![0usize]);

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2)].into();
        let optimizer = GreedyMethod::default();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "Root node's iy should match requested output. Got {:?}, expected {:?}",
                    eins.iy, code.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("Multi-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_treesa_scalar_output() {
        // Issue #13: TreeSA should also respect final output indices
        let code = EinCode::new(vec![vec![0usize, 1], vec![1usize, 2]], vec![]);

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2)].into();
        let optimizer = TreeSA::fast();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "TreeSA root node's iy should match requested output. Got {:?}, expected {:?}",
                    eins.iy, code.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("Multi-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_treesa_partial_output() {
        // A[i,j] @ B[j,k] -> [k] (only keep last index)
        let code = EinCode::new(vec![vec![0usize, 1], vec![1usize, 2]], vec![2usize]);

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2)].into();
        let optimizer = TreeSA::fast();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "TreeSA root node's iy should match requested output. Got {:?}, expected {:?}",
                    eins.iy, code.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("Multi-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_greedy_chain_scalar_output() {
        // 3 tensors: A[i,j] @ B[j,k] @ C[k,l] -> scalar
        // This exercises the level > 0 branch for intermediate nodes
        let code = EinCode::new(
            vec![vec![0usize, 1], vec![1usize, 2], vec![2usize, 3]],
            vec![],
        );

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2), (3, 2)].into();
        let optimizer = GreedyMethod::default();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        // Verify root has requested output
        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "Root node's iy should be empty (scalar). Got {:?}",
                    eins.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("3-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_greedy_chain_partial_output() {
        // 3 tensors: A[i,j] @ B[j,k] @ C[k,l] -> [i] (only first index)
        let code = EinCode::new(
            vec![vec![0usize, 1], vec![1usize, 2], vec![2usize, 3]],
            vec![0usize],
        );

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2), (3, 2)].into();
        let optimizer = GreedyMethod::default();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "Root node's iy should be [0]. Got {:?}",
                    eins.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("3-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_treesa_chain_scalar_output() {
        // 3 tensors with TreeSA
        let code = EinCode::new(
            vec![vec![0usize, 1], vec![1usize, 2], vec![2usize, 3]],
            vec![],
        );

        let sizes: HashMap<usize, usize> = [(0, 2), (1, 2), (2, 2), (3, 2)].into();
        let optimizer = TreeSA::fast();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        match &tree {
            NestedEinsum::Node { eins, .. } => {
                assert_eq!(
                    eins.iy, code.iy,
                    "TreeSA root node's iy should be empty (scalar). Got {:?}",
                    eins.iy
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("3-tensor operation should not return Leaf");
            }
        }
    }

    #[test]
    fn test_issue_13_intermediate_nodes_have_correct_output() {
        // Verify that intermediate nodes have hypergraph-computed output
        // A[i,j] @ B[j,k] @ C[k,l] -> [i,l]
        let code = EinCode::new(
            vec![vec!['i', 'j'], vec!['j', 'k'], vec!['k', 'l']],
            vec!['i', 'l'],
        );

        let sizes: HashMap<char, usize> = [('i', 2), ('j', 2), ('k', 2), ('l', 2)].into();
        let optimizer = GreedyMethod::default();

        let tree = optimize_code(&code, &sizes, &optimizer).expect("should optimize");

        // Root should have requested output [i, l]
        match &tree {
            NestedEinsum::Node { eins, args, .. } => {
                assert_eq!(eins.iy, vec!['i', 'l'], "Root iy should be [i, l]");

                // Check that we have intermediate nodes (not just leaves)
                let has_intermediate = args.iter().any(|arg| !arg.is_leaf());
                assert!(
                    has_intermediate,
                    "Should have at least one intermediate node"
                );
            }
            NestedEinsum::Leaf { .. } => {
                panic!("3-tensor operation should not return Leaf");
            }
        }
    }
}

#[cfg(test)]
mod numerical_verification_tests {
    //! Tests that verify numerical correctness of contraction orders.
    //! These tests execute contractions using NaiveContractor and compare results.

    use super::*;
    use crate::test_utils::{execute_nested, tensors_approx_equal, NaiveContractor};

    #[test]
    fn test_numerical_matrix_chain() {
        // Verify A[i,j] * B[j,k] * C[k,l] * D[l,m] produces same result
        // regardless of contraction order (greedy vs treesa)
        let code = EinCode::new(
            vec![vec![1usize, 2], vec![2, 3], vec![3, 4], vec![4, 5]],
            vec![1, 5],
        );

        let sizes: HashMap<usize, usize> = [(1, 3), (2, 4), (3, 5), (4, 4), (5, 3)].into();
        let label_map: HashMap<usize, usize> = (1..=5).map(|i| (i, i)).collect();

        // Setup identical tensors for both contractors
        let mut contractor_greedy = NaiveContractor::new();
        contractor_greedy.add_tensor(0, vec![3, 4]); // A
        contractor_greedy.add_tensor(1, vec![4, 5]); // B
        contractor_greedy.add_tensor(2, vec![5, 4]); // C
        contractor_greedy.add_tensor(3, vec![4, 3]); // D

        let mut contractor_treesa = contractor_greedy.clone();

        // Optimize with both methods
        let greedy_tree =
            optimize_code(&code, &sizes, &GreedyMethod::default()).expect("Greedy should succeed");
        let treesa_tree =
            optimize_code(&code, &sizes, &TreeSA::fast()).expect("TreeSA should succeed");

        // Execute both
        let greedy_idx = execute_nested(&greedy_tree, &mut contractor_greedy, &label_map);
        let treesa_idx = execute_nested(&treesa_tree, &mut contractor_treesa, &label_map);

        // Compare results
        let greedy_result = contractor_greedy.get_tensor(greedy_idx).unwrap();
        let treesa_result = contractor_treesa.get_tensor(treesa_idx).unwrap();

        assert_eq!(
            greedy_result.shape(),
            treesa_result.shape(),
            "Shapes should match"
        );
        assert!(
            tensors_approx_equal(greedy_result, treesa_result, 1e-10, 1e-12),
            "Greedy and TreeSA should produce identical numerical results"
        );
    }

    #[test]
    fn test_numerical_with_hyperedge() {
        // A[i,j], B[j,k], C[j,l] -> [i,k,l] where j is a hyperedge
        // Note: This test verifies that optimization produces valid trees,
        // but different contraction orders may produce numerically different
        // results due to floating point accumulation differences.
        let code = EinCode::new(vec![vec![1usize, 2], vec![2, 3], vec![2, 4]], vec![1, 3, 4]);

        let sizes: HashMap<usize, usize> = [(1, 2), (2, 3), (3, 2), (4, 2)].into();
        let label_map: HashMap<usize, usize> = (1..=4).map(|i| (i, i)).collect();

        let mut contractor = NaiveContractor::new();
        contractor.add_tensor(0, vec![2, 3]); // A[i,j]
        contractor.add_tensor(1, vec![3, 2]); // B[j,k]
        contractor.add_tensor(2, vec![3, 2]); // C[j,l]

        let greedy_tree =
            optimize_code(&code, &sizes, &GreedyMethod::default()).expect("Greedy should succeed");
        let treesa_tree =
            optimize_code(&code, &sizes, &TreeSA::fast()).expect("TreeSA should succeed");

        // Both should produce valid binary trees
        assert!(greedy_tree.is_binary(), "Greedy should produce binary tree");
        assert!(treesa_tree.is_binary(), "TreeSA should produce binary tree");

        // Both should have correct number of leaves
        assert_eq!(greedy_tree.leaf_count(), 3, "Should have 3 leaves");
        assert_eq!(treesa_tree.leaf_count(), 3, "Should have 3 leaves");

        // Execute greedy tree and verify output shape
        let greedy_idx = execute_nested(&greedy_tree, &mut contractor, &label_map);
        let greedy_result = contractor.get_tensor(greedy_idx).unwrap();
        assert_eq!(
            greedy_result.shape(),
            &[2, 2, 2],
            "Output should be 2x2x2 for [i,k,l]"
        );
    }

    #[test]
    fn test_numerical_scalar_output() {
        // Full contraction to scalar: A[i,j] * B[j,k] * C[k,i] -> scalar
        let code = EinCode::new(vec![vec![1usize, 2], vec![2, 3], vec![3, 1]], vec![]);

        let sizes: HashMap<usize, usize> = [(1, 3), (2, 4), (3, 3)].into();
        let label_map: HashMap<usize, usize> = (1..=3).map(|i| (i, i)).collect();

        let mut contractor_greedy = NaiveContractor::new();
        contractor_greedy.add_tensor(0, vec![3, 4]); // A
        contractor_greedy.add_tensor(1, vec![4, 3]); // B
        contractor_greedy.add_tensor(2, vec![3, 3]); // C

        let mut contractor_treesa = contractor_greedy.clone();

        let greedy_tree =
            optimize_code(&code, &sizes, &GreedyMethod::default()).expect("Greedy should succeed");
        let treesa_tree =
            optimize_code(&code, &sizes, &TreeSA::fast()).expect("TreeSA should succeed");

        let greedy_idx = execute_nested(&greedy_tree, &mut contractor_greedy, &label_map);
        let treesa_idx = execute_nested(&treesa_tree, &mut contractor_treesa, &label_map);

        let greedy_result = contractor_greedy.get_tensor(greedy_idx).unwrap();
        let treesa_result = contractor_treesa.get_tensor(treesa_idx).unwrap();

        assert_eq!(greedy_result.ndim(), 0, "Should produce scalar");
        assert!(
            tensors_approx_equal(greedy_result, treesa_result, 1e-10, 1e-12),
            "Scalar contraction should match"
        );
    }

    #[test]
    fn test_numerical_five_tensor_network() {
        // More complex network: 5 tensors with various connections
        // A[a,b], B[b,c,d], C[c,e], D[d,e,f], E[f,a] -> [e] (partial trace)
        let code = EinCode::new(
            vec![
                vec![1usize, 2], // A[a,b]
                vec![2, 3, 4],   // B[b,c,d]
                vec![3, 5],      // C[c,e]
                vec![4, 5, 6],   // D[d,e,f]
                vec![6, 1],      // E[f,a]
            ],
            vec![5], // Output only 'e'
        );

        let sizes: HashMap<usize, usize> = [(1, 2), (2, 3), (3, 2), (4, 2), (5, 4), (6, 2)].into();
        let label_map: HashMap<usize, usize> = (1..=6).map(|i| (i, i)).collect();

        let mut contractor_greedy = NaiveContractor::new();
        contractor_greedy.add_tensor(0, vec![2, 3]); // A
        contractor_greedy.add_tensor(1, vec![3, 2, 2]); // B
        contractor_greedy.add_tensor(2, vec![2, 4]); // C
        contractor_greedy.add_tensor(3, vec![2, 4, 2]); // D
        contractor_greedy.add_tensor(4, vec![2, 2]); // E

        let mut contractor_treesa = contractor_greedy.clone();

        let greedy_tree =
            optimize_code(&code, &sizes, &GreedyMethod::default()).expect("Greedy should succeed");
        let treesa_tree =
            optimize_code(&code, &sizes, &TreeSA::fast()).expect("TreeSA should succeed");

        let greedy_idx = execute_nested(&greedy_tree, &mut contractor_greedy, &label_map);
        let treesa_idx = execute_nested(&treesa_tree, &mut contractor_treesa, &label_map);

        let greedy_result = contractor_greedy.get_tensor(greedy_idx).unwrap();
        let treesa_result = contractor_treesa.get_tensor(treesa_idx).unwrap();

        assert_eq!(greedy_result.shape(), &[4], "Output should have shape [4]");
        assert!(
            tensors_approx_equal(greedy_result, treesa_result, 1e-8, 1e-10),
            "5-tensor network should produce same result"
        );
    }
}
786
787/// Large-scale stress tests using petgraph for graph generation.
788/// These tests match Julia's OMEinsumContractionOrders test coverage.
789#[cfg(test)]
790mod large_scale_stress_tests {
791    use super::*;
792    use petgraph::graph::UnGraph;
793    use petgraph::visit::EdgeRef;
794    use rand::seq::SliceRandom;
795    use rand::SeedableRng;
796    use std::collections::HashSet;
797
798    /// Generate a random k-regular graph with n vertices.
799    /// Uses a greedy edge-swapping algorithm that guarantees exactly k-regularity.
800    fn generate_random_regular_graph(n: usize, k: usize, seed: u64) -> UnGraph<(), ()> {
801        use rand::Rng;
802        let mut rng = rand::rngs::StdRng::seed_from_u64(seed);
803        let mut graph = UnGraph::new_undirected();
804
805        // Add vertices
806        for _ in 0..n {
807            graph.add_node(());
808        }
809
810        // For k-regular graph: each vertex needs exactly k edges
811        // Total edges = n*k/2 (each edge counted twice)
812        assert!(n * k % 2 == 0, "n*k must be even for k-regular graph");
813
814        // Use configuration model with retry
815        let target_edges = (n * k) / 2;
816        let mut edges: HashSet<(usize, usize)> = HashSet::new();
817        let mut degrees = vec![0usize; n];
818
819        // Greedy construction: repeatedly add edges between vertices with lowest degree
        let max_outer_attempts = 20;
        for _attempt in 0..max_outer_attempts {
            edges.clear();
            degrees.fill(0);

            // Build edges greedily
            for _ in 0..target_edges * 10 {
                // Find vertices that still need edges
                let mut candidates: Vec<usize> = (0..n).filter(|&v| degrees[v] < k).collect();

                if candidates.len() < 2 {
                    break;
                }

                // Try random pairs
                for _ in 0..100 {
                    if candidates.len() < 2 {
                        break;
                    }
                    let i = rng.random_range(0..candidates.len());
                    let u = candidates[i];
                    candidates.swap_remove(i);

                    if candidates.is_empty() {
                        break;
                    }

                    // Find a valid partner for u
                    let valid_partners: Vec<usize> = candidates
                        .iter()
                        .filter(|&&v| {
                            let edge = (u.min(v), u.max(v));
                            !edges.contains(&edge) && degrees[v] < k
                        })
                        .copied()
                        .collect();

                    if valid_partners.is_empty() {
                        continue;
                    }

                    let j = rng.random_range(0..valid_partners.len());
                    let v = valid_partners[j];

                    let edge = (u.min(v), u.max(v));
                    edges.insert(edge);
                    degrees[u] += 1;
                    degrees[v] += 1;

                    // Remove v from candidates if it's now full
                    if degrees[v] >= k {
                        if let Some(pos) = candidates.iter().position(|&x| x == v) {
                            candidates.swap_remove(pos);
                        }
                    }

                    break;
                }

                if edges.len() >= target_edges {
                    break;
                }
            }

            // Check if we got a valid k-regular graph
            if edges.len() == target_edges && degrees.iter().all(|&d| d == k) {
                // Success! Add edges to graph
                for (u, v) in &edges {
                    graph.add_edge(
                        petgraph::graph::NodeIndex::new(*u),
                        petgraph::graph::NodeIndex::new(*v),
                        (),
                    );
                }
                return graph;
            }
        }

        // Fallback: build a near-regular graph by connecting vertices greedily.
        // This always returns a graph, though it may be neither exactly
        // k-regular nor connected.
        edges.clear();
        degrees.fill(0);

        let mut available: Vec<usize> = (0..n).collect();
        available.shuffle(&mut rng);

        for (i, &u) in available.iter().enumerate() {
            for &v in available.iter().skip(i + 1) {
                if degrees[u] < k && degrees[v] < k {
                    let edge = (u.min(v), u.max(v));
                    if !edges.contains(&edge) {
                        edges.insert(edge);
                        degrees[u] += 1;
                        degrees[v] += 1;
                    }
                }
                if degrees[u] >= k {
                    break;
                }
            }
        }

        for (u, v) in &edges {
            graph.add_edge(
                petgraph::graph::NodeIndex::new(*u),
                petgraph::graph::NodeIndex::new(*v),
                (),
            );
        }

        graph
    }
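The generator above relies on the handshake lemma: a simple k-regular graph on n vertices exists only when k < n and n*k is even, in which case it has exactly n*k/2 edges. A minimal standalone sketch of that feasibility check (the function name `regular_graph_edge_count` is illustrative, not part of the crate):

```rust
// Feasibility check for a simple k-regular graph on n vertices:
// such a graph exists only if k < n and n*k is even (handshake lemma),
// in which case it has exactly n*k/2 edges.
fn regular_graph_edge_count(n: usize, k: usize) -> Option<usize> {
    if k >= n || (n * k) % 2 != 0 {
        return None;
    }
    Some(n * k / 2)
}
```

This matches the edge counts assumed by the tests below: 60 edges for (n=40, k=3) and 330 for (n=220, k=3).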

    /// Convert a petgraph to EinCode format for tensor network optimization.
    /// Each edge becomes a matrix tensor, each vertex becomes a vector tensor.
    fn graph_to_eincode(graph: &UnGraph<(), ()>) -> (EinCode<usize>, HashMap<usize, usize>) {
        let n_vertices = graph.node_count();
        let mut ixs: Vec<Vec<usize>> = Vec::new();

        // Each edge contributes a matrix tensor: [src, dst] (using min/max for a canonical order)
        for edge in graph.edge_references() {
            let src = edge.source().index() + 1; // 1-indexed
            let dst = edge.target().index() + 1;
            ixs.push(vec![src.min(dst), src.max(dst)]);
        }

        // Each vertex contributes a vector tensor: [vertex]
        for v in 0..n_vertices {
            ixs.push(vec![v + 1]); // 1-indexed
        }

        // Create uniform size dictionary
        let mut sizes = HashMap::new();
        for v in 1..=n_vertices {
            sizes.insert(v, 2);
        }

        let code = EinCode::new(ixs, vec![]); // Scalar output
        (code, sizes)
    }

    #[test]
    fn test_random_regular_graph_n40_k3() {
        // Julia test: n=40, k=3 random regular graph
        let graph = generate_random_regular_graph(40, 3, 42);

        // Should have 40 vertices and ~60 edges (n*k/2 = 60)
        assert_eq!(graph.node_count(), 40);
        assert!(graph.edge_count() > 50, "Should have close to 60 edges");

        let (code, sizes) = graph_to_eincode(&graph);

        // Optimize with greedy
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(
            greedy_tree.is_some(),
            "Greedy should succeed on 40-node graph"
        );

        let tree = greedy_tree.unwrap();
        let complexity = contraction_complexity(&tree, &sizes, &code.ixs);

        // Verify optimization produces reasonable complexity
        assert!(
            complexity.sc < 30.0,
            "Space complexity should be reasonable"
        );
    }

    #[test]
    fn test_random_regular_graph_n60_k3_treesa() {
        // Julia test: n=60, k=3 random regular graph with TreeSA
        let graph = generate_random_regular_graph(60, 3, 123);
        let (code, sizes) = graph_to_eincode(&graph);

        // First optimize with greedy
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(greedy_tree.is_some());
        let greedy_complexity =
            contraction_complexity(greedy_tree.as_ref().unwrap(), &sizes, &code.ixs);

        // Then optimize with TreeSA using sc_target
        let treesa = TreeSA::default()
            .with_sc_target(greedy_complexity.sc - 2.0)
            .with_niters(50)
            .with_ntrials(2);
        let treesa_tree = optimize_code(&code, &sizes, &treesa);
        assert!(
            treesa_tree.is_some(),
            "TreeSA should succeed on 60-node graph"
        );

        let treesa_complexity =
            contraction_complexity(treesa_tree.as_ref().unwrap(), &sizes, &code.ixs);

        // TreeSA should match or improve on greedy
        assert!(
            treesa_complexity.sc <= greedy_complexity.sc + 1.0,
            "TreeSA should not significantly increase space complexity"
        );
    }

    #[test]
    fn test_large_random_regular_graph_n100() {
        // Stress test: 100-node 3-regular graph
        let graph = generate_random_regular_graph(100, 3, 456);
        let (code, sizes) = graph_to_eincode(&graph);

        // Should have ~150 edges + 100 vertices = ~250 tensors
        assert!(code.num_tensors() > 200, "Should have many tensors");

        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(greedy_tree.is_some(), "Greedy should handle 100-node graph");

        let tree = greedy_tree.unwrap();
        assert!(tree.is_binary(), "Should produce binary tree");
        assert_eq!(
            tree.leaf_count(),
            code.num_tensors(),
            "Should have correct leaf count"
        );
    }

    #[test]
    fn test_path_decomposition_random_graph_n50() {
        // Julia test: path decomposition on n=50 random regular graph
        let graph = generate_random_regular_graph(50, 3, 789);
        let (code, sizes) = graph_to_eincode(&graph);

        // Optimize with path decomposition
        let path_treesa = TreeSA::path().with_niters(30).with_ntrials(2);
        let path_tree = optimize_code(&code, &sizes, &path_treesa);
        assert!(path_tree.is_some(), "Path TreeSA should succeed");

        let tree = path_tree.unwrap();

        // Path decomposition should produce a path structure
        assert!(
            tree.is_path_decomposition(),
            "Should produce path decomposition"
        );

        // Also run tree decomposition for comparison
        let tree_treesa = TreeSA::default().with_niters(30).with_ntrials(2);
        let tree_tree = optimize_code(&code, &sizes, &tree_treesa);
        assert!(tree_tree.is_some());

        // Tree decomposition typically doesn't produce a path structure
        // (unless by coincidence), so we only check that it succeeds.
    }

    #[test]
    fn test_fullerene_c60_optimization() {
        // Julia test: C60 fullerene graph (60 vertices, 90 edges)
        use crate::test_utils::generate_fullerene_edges;

        let edges = generate_fullerene_edges();
        let mut ixs: Vec<Vec<usize>> = Vec::new();

        // Add edge tensors
        for (a, b) in &edges {
            ixs.push(vec![*a, *b]);
        }

        // Add vertex tensors
        for v in 1..=60 {
            ixs.push(vec![v]);
        }

        let code = EinCode::new(ixs, vec![]); // Scalar output
        let sizes: HashMap<usize, usize> = (1..=60).map(|v| (v, 2)).collect();

        // Unoptimized complexity
        let unopt = crate::complexity::eincode_complexity(&code, &sizes);
        assert_eq!(
            unopt.tc, 60.0,
            "Unoptimized tc should be 60 (60 binary indices, log2 scale)"
        );
        assert_eq!(unopt.sc, 0.0, "Unoptimized sc should be 0 (scalar output)");

        // Optimize
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(greedy_tree.is_some(), "Should optimize C60");

        let tree = greedy_tree.unwrap();
        let complexity = contraction_complexity(&tree, &sizes, &code.ixs);

        // Verify reasonable optimization (actual value depends on exact graph structure)
        assert!(
            complexity.sc <= 20.0,
            "C60 space complexity should be <= 20, got {}",
            complexity.sc
        );
    }

    #[test]
    fn test_chain_and_ring_topologies() {
        // Julia test: chain and ring graphs

        // Chain: 9 tensors forming a path
        let chain_ixs: Vec<Vec<usize>> = (1..=9).map(|i| vec![i, i + 1]).collect();
        let chain_code = EinCode::new(chain_ixs, vec![1, 10]);
        let chain_sizes: HashMap<usize, usize> = (1..=10).map(|v| (v, 2)).collect();

        let chain_tree = optimize_code(&chain_code, &chain_sizes, &GreedyMethod::default());
        assert!(chain_tree.is_some());
        let chain_complexity =
            contraction_complexity(chain_tree.as_ref().unwrap(), &chain_sizes, &chain_code.ixs);
        assert_eq!(chain_complexity.sc, 2.0, "Chain should have sc=2");

        // Ring: 10 tensors forming a cycle
        let mut ring_ixs: Vec<Vec<usize>> = (1..=9).map(|i| vec![i, i + 1]).collect();
        ring_ixs.push(vec![10, 1]); // Close the ring
        let ring_code = EinCode::new(ring_ixs, vec![]);
        let ring_sizes: HashMap<usize, usize> = (1..=10).map(|v| (v, 2)).collect();

        let ring_tree = optimize_code(&ring_code, &ring_sizes, &GreedyMethod::default());
        assert!(ring_tree.is_some());
        let ring_complexity =
            contraction_complexity(ring_tree.as_ref().unwrap(), &ring_sizes, &ring_code.ixs);
        assert_eq!(ring_complexity.sc, 2.0, "Ring should have sc=2");
    }

    #[test]
    fn test_tutte_graph_stochastic_optimization() {
        // Julia test: Tutte graph (46 vertices, 69 edges) with stochastic optimization
        use crate::test_utils::generate_tutte_edges;

        let edges = generate_tutte_edges();
        let mut ixs: Vec<Vec<usize>> = Vec::new();

        // Add unique edges
        let mut seen_edges: HashSet<(usize, usize)> = HashSet::new();
        for (a, b) in &edges {
            let edge = ((*a).min(*b), (*a).max(*b));
            if !seen_edges.contains(&edge) {
                ixs.push(vec![edge.0, edge.1]);
                seen_edges.insert(edge);
            }
        }

        // Find max vertex
        let max_vertex = edges.iter().flat_map(|(a, b)| [*a, *b]).max().unwrap_or(0);

        let code = EinCode::new(ixs, vec![]); // Scalar output
        let sizes: HashMap<usize, usize> = (1..=max_vertex).map(|v| (v, 2)).collect();

        // Run stochastic optimization multiple times
        let mut best_sc = f64::INFINITY;
        for _trial in 0..10 {
            let stochastic_greedy = GreedyMethod::stochastic(100.0);
            if let Some(tree) = optimize_code(&code, &sizes, &stochastic_greedy) {
                let complexity = contraction_complexity(&tree, &sizes, &code.ixs);
                if complexity.sc < best_sc {
                    best_sc = complexity.sc;
                }
            }
        }

        // Julia expects minimum sc <= 5 after multiple trials; we allow some slack here
        assert!(
            best_sc <= 8.0,
            "Best Tutte graph sc should be <= 8, got {}",
            best_sc
        );
    }

    #[test]
    fn test_petersen_graph_optimization() {
        // Petersen graph: 10 vertices, 15 edges, 3-regular
        use crate::test_utils::generate_petersen_edges;

        let edges = generate_petersen_edges();
        let mut ixs: Vec<Vec<usize>> = Vec::new();

        for (a, b) in &edges {
            ixs.push(vec![*a, *b]);
        }

        // Add vertex tensors
        for v in 1..=10 {
            ixs.push(vec![v]);
        }

        let code = EinCode::new(ixs, vec![]);
        let sizes: HashMap<usize, usize> = (1..=10).map(|v| (v, 2)).collect();

        let tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(tree.is_some());

        let complexity = contraction_complexity(tree.as_ref().unwrap(), &sizes, &code.ixs);

        // Petersen graph should have reasonable complexity
        assert!(
            complexity.sc <= 6.0,
            "Petersen sc should be <= 6, got {}",
            complexity.sc
        );
    }

    #[test]
    fn test_very_large_graph_n200() {
        // Stress test: 200-node graph (approaching Julia's n=220 test)
        let graph = generate_random_regular_graph(200, 3, 999);
        let (code, sizes) = graph_to_eincode(&graph);

        // This is a large problem: ~300 edges + 200 vertices = ~500 tensors
        assert!(code.num_tensors() > 400, "Should have many tensors");

        // Greedy should still work
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(greedy_tree.is_some(), "Greedy should handle 200-node graph");

        let tree = greedy_tree.unwrap();
        let complexity = contraction_complexity(&tree, &sizes, &code.ixs);

        // Verify reasonable complexity (Julia gets sc ~32 for n=220)
        assert!(
            complexity.sc <= 55.0,
            "Large graph sc should be <= 55, got {}",
            complexity.sc
        );
    }

    #[test]
    fn test_reg3_220_treesa() {
        // =========================================================================
        // ALIGNED WITH JULIA - DO NOT MODIFY WITHOUT CHECKING JULIA TESTS
        // Julia "sa tree" test in test/treesa.jl:
        //   Random.seed!(2)
        //   code = random_regular_eincode(220, 3)
        //   res = optimize_greedy(code, uniformsize(code, 2); α=0.0, temperature=0.0)
        //   optcode = optimize_tree(res, uniformsize(code, 2);
        //       βs=0.1:0.05:20.0, ntrials=2, niters=10,
        //       initializer=:greedy, score=ScoreFunction(sc_target=32))
        //   @test cc.sc <= 32
        //
        // NOTE: Uses shared graph file benchmarks/graphs/reg3_220.json generated
        // by Julia with Random.seed!(2) to ensure identical graph structure.
        // =========================================================================

        // Load graph from shared JSON file (generated by Julia with Random.seed!(2))
        let graph_json = include_str!("../../benchmarks/graphs/reg3_220.json");
        let graph_data: serde_json::Value = serde_json::from_str(graph_json).unwrap();
        let edge_list = graph_data["edge_list"].as_array().unwrap();

        // Build EinCode from edge list
        let mut ixs: Vec<Vec<usize>> = Vec::new();
        for edge in edge_list {
            let arr = edge.as_array().unwrap();
            let a = arr[0].as_u64().unwrap() as usize;
            let b = arr[1].as_u64().unwrap() as usize;
            ixs.push(vec![a, b]);
        }
        // Add vertex tensors (1..=220)
        for v in 1..=220 {
            ixs.push(vec![v]);
        }

        let code = EinCode::new(ixs, vec![]);

        // Debug: check unique labels
        let unique_labels = code.unique_labels();
        eprintln!("Unique labels: {}", unique_labels.len());
        eprintln!("First 5 tensors: {:?}", &code.ixs[0..5]);
        eprintln!("Last 5 tensors: {:?}", &code.ixs[545..550]);

        let sizes: HashMap<usize, usize> = unique_labels.iter().map(|&v| (v, 2)).collect();

        // This is a large problem: 330 edges + 220 vertices = 550 tensors
        assert_eq!(code.num_tensors(), 550, "Should have 550 tensors");

        // First get greedy baseline (Julia: res = optimize_greedy(...))
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default());
        assert!(greedy_tree.is_some(), "Greedy should handle 220-node graph");

        let greedy_complexity =
            contraction_complexity(greedy_tree.as_ref().unwrap(), &sizes, &code.ixs);
        eprintln!(
            "Greedy: tc={:.2}, sc={:.2}",
            greedy_complexity.tc, greedy_complexity.sc
        );

        // TreeSA with sc_target=32 (matching Julia exactly)
        // Julia: βs=0.1:0.05:20.0 = 399 values, ntrials=2, niters=10
        let treesa = TreeSA::default()
            .with_betas((0..399).map(|i| 0.1 + 0.05 * i as f64).collect())
            .with_ntrials(2)
            .with_niters(10)
            .with_sc_target(32.0);
        let treesa_tree = optimize_code(&code, &sizes, &treesa);
        assert!(treesa_tree.is_some(), "TreeSA should succeed");

        let treesa_complexity =
            contraction_complexity(treesa_tree.as_ref().unwrap(), &sizes, &code.ixs);

        eprintln!(
            "TreeSA: tc={:.2}, sc={:.2}",
            treesa_complexity.tc, treesa_complexity.sc
        );

        // Julia test: @test cc.sc <= 32
        assert!(
            treesa_complexity.sc <= 32.0,
            "TreeSA with sc_target=32 should achieve sc <= 32, got {}",
            treesa_complexity.sc
        );
    }
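The beta schedule passed to `with_betas` above is meant to reproduce Julia's `0.1:0.05:20.0` range: 399 evenly spaced inverse temperatures from 0.1 to 20.0 inclusive. A standalone sketch of that schedule (the helper name `beta_schedule` is illustrative, not part of the crate):

```rust
// Inverse-temperature schedule mirroring Julia's 0.1:0.05:20.0 range:
// 399 values starting at 0.1 with step 0.05, ending at 20.0.
fn beta_schedule() -> Vec<f64> {
    (0..399).map(|i| 0.1 + 0.05 * i as f64).collect()
}
```

The count 399 follows from (20.0 - 0.1) / 0.05 + 1; the last element is 0.1 + 0.05 * 398 = 20.0 up to floating-point rounding.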

    #[test]
    fn test_treesa_with_sc_target_large_graph() {
        // Julia test: TreeSA with sc_target on large graph
        let graph = generate_random_regular_graph(80, 3, 111);
        let (code, sizes) = graph_to_eincode(&graph);

        // Get greedy baseline
        let greedy_tree = optimize_code(&code, &sizes, &GreedyMethod::default()).unwrap();
        let greedy_complexity = contraction_complexity(&greedy_tree, &sizes, &code.ixs);

        // Use TreeSA with aggressive sc_target
        let sc_target = greedy_complexity.sc - 3.0;
        let treesa = TreeSA::default()
            .with_sc_target(sc_target)
            .with_niters(100)
            .with_ntrials(3);

        let treesa_tree = optimize_code(&code, &sizes, &treesa);
        assert!(treesa_tree.is_some());

        let treesa_complexity =
            contraction_complexity(treesa_tree.as_ref().unwrap(), &sizes, &code.ixs);

        // TreeSA should try to meet the target
        // (may not always succeed, but should be close)
        assert!(
            treesa_complexity.sc <= greedy_complexity.sc + 2.0,
            "TreeSA should not be much worse than greedy"
        );
    }

    #[test]
    fn test_multiple_optimizers_consistency() {
        // Verify that different optimizers produce valid results on the same problem
        let graph = generate_random_regular_graph(30, 3, 222);
        let (code, sizes) = graph_to_eincode(&graph);

        // Helper to verify optimizer results
        fn verify_optimizer_result(
            tree: Option<NestedEinsum<usize>>,
            code: &EinCode<usize>,
            sizes: &HashMap<usize, usize>,
            name: &str,
        ) {
            assert!(tree.is_some(), "{} should succeed", name);
            let t = tree.unwrap();
            assert!(t.is_binary(), "{} should produce binary tree", name);
            assert_eq!(
                t.leaf_count(),
                code.num_tensors(),
                "{} should have correct leaves",
                name
            );

            let complexity = contraction_complexity(&t, sizes, &code.ixs);
            assert!(
                complexity.tc > 0.0,
                "{} should have positive time complexity",
                name
            );
            assert!(
                complexity.sc >= 0.0,
                "{} space complexity should be non-negative",
                name
            );
        }

        // Test each optimizer
        verify_optimizer_result(
            optimize_code(&code, &sizes, &GreedyMethod::default()),
            &code,
            &sizes,
            "GreedyMethod::default",
        );
        verify_optimizer_result(
            optimize_code(&code, &sizes, &GreedyMethod::stochastic(10.0)),
            &code,
            &sizes,
            "GreedyMethod::stochastic",
        );
        verify_optimizer_result(
            optimize_code(&code, &sizes, &TreeSA::fast()),
            &code,
            &sizes,
            "TreeSA::fast",
        );
        verify_optimizer_result(
            optimize_code(&code, &sizes, &TreeSA::path()),
            &code,
            &sizes,
            "TreeSA::path",
        );
    }
}