tensorlogic-infer
Engine-agnostic execution traits, optimization utilities, and planning API for TensorLogic.
Overview
tensorlogic-infer provides the abstract execution interface and optimization infrastructure for TensorLogic backends. The crate defines the traits a backend must implement, along with utilities for graph optimization, scheduling, profiling, and memory management.
Key Components
Core Execution Traits
- TlExecutor: Basic forward execution of compiled graphs
- TlAutodiff: Forward/backward pass for automatic differentiation
- TlEagerAutodiff: 🆕 Eager mode autodiff with dynamic graph building (see the tape sketch after this list)
- TlBatchExecutor: Efficient batch execution with parallel support
- TlStreamingExecutor: Streaming execution for large datasets
- TlCompilableExecutor: Ahead-of-time graph compilation support
- TlCapabilities: Backend capability queries (devices, dtypes, features)
- TlProfiledExecutor: Execution profiling and performance analysis
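To give intuition for what eager-mode autodiff with a tape means, here is a self-contained toy sketch. It is deliberately *not* the crate's `Variable`/`EagerTape` API (whose types and signatures may differ): each operation records an entry as it executes, and `grad` replays the entries in reverse, accumulating chain-rule contributions.

```rust
// Toy eager-mode tape, for intuition only.
struct TapeEntry {
    inputs: Vec<usize>,    // indices of the op's inputs
    local_grads: Vec<f64>, // d(output)/d(input) for each input
    output: usize,         // index of the op's output
}

#[derive(Default)]
struct Tape {
    values: Vec<f64>,
    entries: Vec<TapeEntry>,
}

impl Tape {
    /// Register a new leaf variable and return its index.
    fn var(&mut self, v: f64) -> usize {
        self.values.push(v);
        self.values.len() - 1
    }

    /// Eager multiply: compute the value now, record the op for backward.
    fn mul(&mut self, a: usize, b: usize) -> usize {
        let (va, vb) = (self.values[a], self.values[b]);
        let out = self.var(va * vb);
        self.entries.push(TapeEntry {
            inputs: vec![a, b],
            local_grads: vec![vb, va], // product rule
            output: out,
        });
        out
    }

    /// Gradient of `output` with respect to every recorded variable.
    fn grad(&self, output: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.values.len()];
        grads[output] = 1.0;
        // Walk the tape backwards, accumulating chain-rule contributions.
        for e in self.entries.iter().rev() {
            for (i, g) in e.inputs.iter().zip(&e.local_grads) {
                grads[*i] += grads[e.output] * g;
            }
        }
        grads
    }
}
```

With this toy tape, `let x = t.var(3.0); let y = t.mul(x, x);` records one entry, and `t.grad(y)[x]` evaluates to `6.0`, matching d(x²)/dx at x = 3.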
Optimization Infrastructure
- GraphOptimizer: Fusion detection, dead node elimination, redundancy analysis
- FusionPlanner: Planning and validation of operation fusion
- Scheduler: Execution scheduling (sequential, parallel, cost-based)
- PlacementOptimizer: Multi-device placement and coordination
- GraphCompiler: AOT graph compilation with multiple optimization levels
- CompilationCache: Caching of compiled graphs to avoid recompilation (a combined sketch follows this list)
- MemoryEstimator: Memory usage estimation and lifetime analysis
- ShapeInferenceContext: Tensor shape inference for optimization
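For the AOT pieces above, the intended flow is compile-once, reuse-thereafter. A hedged sketch of how compilation and caching might compose (the type names follow the list above, but constructors, the `graph_key` cache key, and method signatures are assumptions, not the crate's confirmed API):

```rust
// Illustrative compile-and-cache flow; not the crate's confirmed API.
let compiler = GraphCompiler::new(OptimizationLevel::Aggressive);
let mut cache = CompilationCache::new();

let compiled = if let Some(artifact) = cache.get(&graph_key).cloned() {
    artifact // cache hit: skip recompilation entirely
} else {
    let artifact = compiler.compile(&graph)?;
    cache.insert(graph_key, artifact.clone());
    artifact
};
```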
Runtime Utilities
- TensorCache: Result caching with LRU/FIFO/LFU eviction
- MemoryPool: Tensor memory pooling for allocation reuse
- ExecutionStrategy: Complete strategy configuration
- ExecutionContext: State management with lifecycle hooks
- GraphValidator: Graph validation and diagnostics
Testing & Development Tools 🆕
- BackendTestAdapter: Comprehensive test templates for backend validation
- GradientChecker: Numerical gradient checking for autodiff verification (see the sketch after this list)
- PerfRegression: Performance regression testing with baseline comparison
- Variable & EagerTape: Eager mode execution with gradient tracking
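The core idea behind a gradient checker is central-difference checking: perturb an input by ±ε and compare the numeric slope against the analytic gradient. A minimal self-contained sketch, independent of the crate's actual API:

```rust
/// Compare an analytic derivative of a scalar function against a
/// central finite difference at the point `x`.
fn check_gradient(f: impl Fn(f64) -> f64, x: f64, analytic: f64, tol: f64) -> bool {
    let eps = 1e-6;
    let numeric = (f(x + eps) - f(x - eps)) / (2.0 * eps);
    // Relative tolerance guards against both tiny and huge gradients.
    (numeric - analytic).abs() <= tol * (1.0 + analytic.abs())
}

// For f(x) = x^2 at x = 3.0 the analytic derivative is 6.0:
assert!(check_gradient(|x| x * x, 3.0, 6.0, 1e-4));
```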
Quick Start
```rust
// NOTE: import paths and argument lists in this README's snippets are
// reconstructed for illustration; see the crate docs for the exact API.
use tensorlogic_infer::TlAutodiff;
use tensorlogic_scirs_backend::Scirs2Exec;
use tensorlogic_compiler::EinsumGraph;

// Create executor
let mut executor = Scirs2Exec::new();

// Forward pass
let outputs = executor.forward(&graph, &inputs)?;

// Backward pass
executor.backward(&loss_grads)?;
let param_grads = executor.get_gradients()?;
```
Core Traits
TlExecutor
Basic execution interface for forward passes:
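The crate defines the real trait; the following is only a sketch of its likely shape (associated types, parameter types, and exact signatures are assumptions):

```rust
use std::collections::HashMap;

// Sketch only; the actual trait in tensorlogic-infer may differ.
// EinsumGraph is the compiled graph type from the compiler crate.
pub trait TlExecutor {
    type Tensor;
    type Error;

    /// Run a compiled EinsumGraph on named inputs, returning named outputs.
    fn execute(
        &self,
        graph: &EinsumGraph,
        inputs: &HashMap<String, Self::Tensor>,
    ) -> Result<HashMap<String, Self::Tensor>, Self::Error>;
}
```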
TlAutodiff
Automatic differentiation support:
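Again a sketch only, mirroring the methods used in the Quick Start (`forward`, `backward`, `get_gradients`); the exact signatures are assumptions:

```rust
// Sketch only; mirrors the Quick Start usage above.
pub trait TlAutodiff: TlExecutor {
    /// Forward pass that records the state backward() will need.
    fn forward(
        &mut self,
        graph: &EinsumGraph,
        inputs: &HashMap<String, Self::Tensor>,
    ) -> Result<HashMap<String, Self::Tensor>, Self::Error>;

    /// Backward pass seeded with gradients of the loss w.r.t. the outputs.
    fn backward(
        &mut self,
        output_grads: &HashMap<String, Self::Tensor>,
    ) -> Result<(), Self::Error>;

    /// Accumulated parameter gradients, keyed by parameter name.
    fn get_gradients(&self) -> Result<HashMap<String, Self::Tensor>, Self::Error>;
}
```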
TlBatchExecutor
Efficient batch execution with parallel support:
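A sketch of the batch interface. `execute_batch_parallel` appears in the examples below; the sequential variant and the hypothetical `BatchResult` container (per-item outputs plus timing) are assumptions:

```rust
// Sketch only.
pub trait TlBatchExecutor: TlExecutor {
    /// Execute the graph once per input map, sequentially.
    fn execute_batch(
        &self,
        graph: &EinsumGraph,
        batch: &[HashMap<String, Self::Tensor>],
    ) -> Result<BatchResult<Self::Tensor>, Self::Error>;

    /// Same, but fan the batch out across worker threads.
    fn execute_batch_parallel(
        &self,
        graph: &EinsumGraph,
        batch: &[HashMap<String, Self::Tensor>],
    ) -> Result<BatchResult<Self::Tensor>, Self::Error>;
}
```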
TlStreamingExecutor
Streaming execution for large datasets:
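A sketch of the streaming interface, matching the `execute_stream` call in the examples below (the iterator-based signature is an assumption):

```rust
// Sketch only.
pub trait TlStreamingExecutor: TlExecutor {
    /// Execute over a stream of input chunks, yielding per-chunk outputs.
    fn execute_stream<I>(
        &self,
        graph: &EinsumGraph,
        chunks: I,
        config: &StreamConfig,
    ) -> Result<Vec<HashMap<String, Self::Tensor>>, Self::Error>
    where
        I: Iterator<Item = HashMap<String, Self::Tensor>>;
}
```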
Streaming Modes:
```rust
use tensorlogic_infer::StreamConfig; // path and constructor names are illustrative

// Fixed chunk size
let config = StreamConfig::fixed_chunks(1024)
    .with_prefetch(2)
    .with_checkpointing(true);

// Dynamic chunk sizing based on available memory
let config = StreamConfig::dynamic();

// Adaptive chunking based on observed performance
let config = StreamConfig::adaptive();
```
TlCapabilities
Query backend capabilities:
```rust
// Example usage (field names are illustrative)
let caps = executor.capabilities();
println!("devices:  {:?}", caps.devices);
println!("dtypes:   {:?}", caps.dtypes);
println!("features: {:?}", caps.features);
```
TlProfiledExecutor
Execution profiling and performance analysis:
```rust
// Example usage (profile fields are illustrative)
executor.enable_profiling(true);
executor.execute(&graph, &inputs)?;
let profile = executor.get_profile_data();
for op in &profile.op_profiles {
    println!("{}: {:?}", op.name, op.duration);
}
```
Optimization Utilities
GraphOptimizer
Analyze and optimize computation graphs:
```rust
use tensorlogic_infer::{GraphOptimizer, OptimizationResult};

let optimizer = GraphOptimizer::new();
let result: OptimizationResult = optimizer.analyze(&graph);

// Result fields are illustrative; they mirror the analyses listed above
println!("fusion opportunities: {}", result.fusion_opportunities.len());
println!("dead nodes: {}", result.dead_nodes.len());
println!("redundant nodes: {}", result.redundant_nodes.len());
```
FusionPlanner
Plan operation fusion:
```rust
use tensorlogic_infer::FusionPlanner;

let planner = FusionPlanner::new();
let opportunities = planner.find_fusion_opportunities(&graph);
for opp in &opportunities {
    println!("fusable: {:?}", opp); // opportunity fields are illustrative
}
```
Scheduler
Execution scheduling with multiple strategies:
```rust
use tensorlogic_infer::{Scheduler, SchedulingStrategy};

let scheduler = Scheduler::new(SchedulingStrategy::CostBased);
let schedule = scheduler.schedule(&graph)?;

// Schedule fields are illustrative
println!("stages: {}", schedule.stages.len());
println!("estimated cost: {:?}", schedule.estimated_cost);
```
Scheduling Strategies:
- Sequential: Simple topological order
- Parallel: Maximize parallelism across independent nodes
- CostBased: Balance parallelism with execution cost
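To see the trade-off concretely, here is a hedged sketch (reusing the illustrative `Scheduler::new(strategy)` constructor from above) that compares stage counts across the three strategies; fewer stages generally means more operations run concurrently per stage:

```rust
// Illustrative comparison of the three strategies.
for strategy in [
    SchedulingStrategy::Sequential,
    SchedulingStrategy::Parallel,
    SchedulingStrategy::CostBased,
] {
    let name = format!("{:?}", strategy);
    let schedule = Scheduler::new(strategy).schedule(&graph)?;
    println!("{name}: {} stages", schedule.stages.len());
}
```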
PlacementOptimizer
Multi-device placement optimization:
```rust
use tensorlogic_infer::PlacementOptimizer;

// Device construction and plan fields are illustrative
let devices = vec![Device::cpu(), Device::gpu(0)];
let optimizer = PlacementOptimizer::new(devices);
let plan = optimizer.optimize(&graph)?;
for (node, device) in &plan.node_placements {
    println!("{:?} -> {:?}", node, device);
}
```
Memory Management
TensorCache: Cache computation results
```rust
use tensorlogic_infer::TensorCache;

let mut cache = TensorCache::new(1000); // 1000 MB limit
// Cache usage is automatic when integrated with the executor, but
// entries can be managed directly (key/value types are illustrative):
cache.insert(key, tensor);
if let Some(tensor) = cache.get(&key) {
    // reuse the cached result instead of recomputing
}
```
MemoryPool: Reuse tensor allocations
```rust
use tensorlogic_infer::MemoryPool;

let mut pool = MemoryPool::new();

// Allocate, reusing a pooled buffer when one fits (signature illustrative)
let tensor = pool.allocate(&shape, dtype)?;

// Return the buffer to the pool
pool.deallocate(tensor);

// Stats (field names are illustrative)
let stats = pool.stats();
println!("reuse rate: {:.1}%", stats.reuse_rate * 100.0);
```
ExecutionStrategy
Configure complete execution strategy:
```rust
use tensorlogic_infer::ExecutionStrategy;

// Field and optimizer type names are illustrative
let strategy = ExecutionStrategy {
    enable_fusion: true,
    enable_caching: true,
    ..Default::default()
};

let optimizer = StrategyOptimizer::new();
let optimized = optimizer.optimize_for_throughput(strategy);
```
ExecutionContext
Manage execution state with lifecycle hooks:
```rust
use tensorlogic_infer::ExecutionContext;

let mut context = ExecutionContext::new();
context.add_hook(my_hook); // hook value is illustrative

// Notify hooks at lifecycle points (event names are illustrative)
context.notify(ExecutionEvent::Started);
context.notify(ExecutionEvent::NodeCompleted);
context.notify(ExecutionEvent::Finished);
```
Validation and Analysis
GraphValidator
Validate computation graphs:
```rust
use tensorlogic_infer::GraphValidator;

let validator = GraphValidator::new();
let result = validator.validate(&graph);
if !result.is_valid {
    // Diagnostics field is illustrative
    for diagnostic in &result.diagnostics {
        eprintln!("{}", diagnostic);
    }
}
```
MemoryEstimator
Estimate memory usage:
```rust
use tensorlogic_infer::MemoryEstimator;

let estimator = MemoryEstimator::new();
let estimate = estimator.estimate(&graph);

// Estimate fields are illustrative
println!("peak memory: {} bytes", estimate.peak_bytes);
println!("total allocations: {}", estimate.total_allocations);
```
ShapeInferenceContext
Infer tensor shapes:
```rust
use tensorlogic_infer::ShapeInferenceContext;

let mut ctx = ShapeInferenceContext::new();
ctx.set_input_shape("x", &[32, 128]); // name/shape are illustrative
let inferred = ctx.infer_shapes(&graph)?;
for (tensor, shape) in &inferred {
    println!("{}: {:?}", tensor, shape);
}
```
Debugging Tools
ExecutionTracer
Record and analyze execution flow:
```rust
use tensorlogic_infer::ExecutionTracer;

let mut tracer = ExecutionTracer::new();
tracer.enable();
tracer.start_trace();

// Execute operations...
let handle = tracer.record_operation_start("matmul"); // op name is illustrative
// ... operation execution ...
tracer.record_operation_end(handle);

// Get trace
let trace = tracer.get_trace();
let summary = trace.summary();
println!("total operations: {}", summary.total_operations); // fields illustrative
println!("total time: {:?}", summary.total_time);

// Find slowest operations (argument is illustrative)
let slowest = trace.slowest_operations(10);
for entry in slowest {
    println!("{}: {:?}", entry.name, entry.duration);
}
```
TensorInspector
Examine intermediate tensor values:
```rust
use tensorlogic_infer::TensorInspector;

let mut inspector = TensorInspector::new();
inspector.enable();
inspector.watch("logits"); // watch a specific tensor (name is illustrative)

// Record statistics (config type and fields are illustrative)
let config = InspectionConfig::new().with_statistics(true);
inspector.record_stats(&config);

// Check for numerical issues (NaN/Inf, extreme values)
let problematic = inspector.find_problematic_tensors();
for tensor in problematic {
    eprintln!("numerical issue in {:?}", tensor);
}
```
BreakpointManager
Pause execution for debugging:
```rust
use std::time::Duration;
use tensorlogic_infer::BreakpointManager;

let mut breakpoints = BreakpointManager::new();
breakpoints.enable();

// Add various breakpoint types (arguments are illustrative)
breakpoints.add_node_breakpoint("node_42");
breakpoints.add_operation_breakpoint("matmul");
breakpoints.add_numerical_issue_breakpoint();
breakpoints.add_time_threshold_breakpoint(Duration::from_millis(5)); // 5 ms

// Check during execution
if let Some(bp) = breakpoints.should_break(&node) {
    // pause here: inspect tensors, dump a trace, etc.
}
```
ExecutionRecorder
Full execution recording for replay:
```rust
use tensorlogic_infer::ExecutionRecorder;

let mut recorder = ExecutionRecorder::new();
recorder.enable();

// All debugging features are reachable through one handle
recorder.tracer.start_trace();
recorder.inspector.watch("logits");                  // name is illustrative
recorder.breakpoints.add_node_breakpoint("node_42"); // name is illustrative

// Generate a comprehensive report
let report = recorder.generate_report();
println!("{}", report);
```
Advanced Profiling
TimelineProfiler
Create detailed execution timelines:
```rust
use tensorlogic_infer::TimelineProfiler;

let mut profiler = TimelineProfiler::new();
let hook = TimelineHook::new(&profiler); // hook type/constructor are illustrative

// Attach to context
context.add_hook(hook);

// Execute
executor.execute(&graph, &inputs)?;

// Analyze timeline (entry fields are illustrative)
let entries = profiler.entries();
for entry in entries {
    println!("{}: {:?} -> {:?}", entry.name, entry.start, entry.end);
}
```
BottleneckAnalyzer
Identify performance bottlenecks:
```rust
use tensorlogic_infer::BottleneckAnalyzer;

let analyzer = BottleneckAnalyzer::new();
let report = analyzer.analyze(&profile);

println!("bottlenecks: {}", report.bottlenecks.len());
for bottleneck in &report.bottlenecks {
    println!("{:?}", bottleneck);
}
for rec in &report.recommendations {
    println!("recommendation: {}", rec);
}
```
PerformanceComparison
Compare execution strategies:
```rust
use tensorlogic_infer::PerformanceComparison;

// Compare a current profile against a recorded baseline
// (constructor and field names are illustrative)
let baseline = Baseline::from_profile(&baseline_profile);
let comparison = PerformanceComparison::new(&baseline, &current_profile);
println!("speedup: {:.2}x", comparison.speedup);
println!("memory delta: {} bytes", comparison.memory_delta);
```
Testing Support
DummyExecutor
Minimal executor for testing:
```rust
use tensorlogic_infer::DummyExecutor;

let executor = DummyExecutor::new();
let outputs = executor.execute(&graph, &inputs)?;
// Returns empty outputs, which is enough for wiring and plumbing tests
```
Examples
Basic Execution
```rust
use std::collections::HashMap;
use tensorlogic_infer::TlExecutor;
use tensorlogic_scirs_backend::Scirs2Exec;

let executor = Scirs2Exec::new();
let mut inputs = HashMap::new();
inputs.insert("x".to_string(), tensor); // name/value are illustrative
let outputs = executor.execute(&graph, &inputs)?;
```
Batch Processing
```rust
use tensorlogic_infer::TlBatchExecutor;

let batch_inputs = vec![inputs_a, inputs_b, inputs_c]; // illustrative batch
let result = executor.execute_batch_parallel(&graph, &batch_inputs)?;
// Result fields are illustrative
println!("batch size: {}", result.outputs.len());
println!("total time: {:?}", result.total_time);
```
Streaming Large Datasets
```rust
use tensorlogic_infer::{StreamConfig, TlStreamingExecutor};

let config = StreamConfig::fixed_chunks(1024).with_prefetch(2); // illustrative
let results = executor.execute_stream(&graph, input_chunks, &config)?;
for result in results {
    // process each chunk's outputs as they arrive
}
```
Training with Autodiff
```rust
use tensorlogic_infer::TlAutodiff;

// Forward pass
let outputs = executor.forward(&graph, &inputs)?;

// Compute loss gradients (user-provided; signature is illustrative)
let loss_grads = compute_loss_gradients(&outputs, &targets);

// Backward pass
executor.backward(&loss_grads)?;

// Get parameter gradients
let grads = executor.get_gradients()?;

// Update parameters (update step is illustrative)
for (name, grad) in grads {
    apply_update(&mut params, &name, &grad);
}
```
Architecture
```text
tensorlogic-infer
├── Core Traits
│   ├── TlExecutor (basic execution)
│   ├── TlAutodiff (training)
│   ├── TlBatchExecutor (batching)
│   ├── TlStreamingExecutor (streaming)
│   ├── TlCapabilities (queries)
│   └── TlProfiledExecutor (profiling)
├── Optimization
│   ├── GraphOptimizer (analysis)
│   ├── FusionPlanner (fusion)
│   ├── Scheduler (ordering)
│   └── PlacementOptimizer (devices)
├── Runtime
│   ├── TensorCache (caching)
│   ├── MemoryPool (pooling)
│   ├── ExecutionStrategy (config)
│   └── ExecutionContext (state)
└── Analysis
    ├── GraphValidator (validation)
    ├── MemoryEstimator (memory)
    ├── ShapeInferenceContext (shapes)
    └── BottleneckAnalyzer (perf)
```
Integration with Other Crates
tensorlogic-scirs-backend: Reference implementation using SciRS2
```rust
use tensorlogic_scirs_backend::Scirs2Exec;

let executor = Scirs2Exec::new();
```
tensorlogic-train: Training infrastructure
```rust
use tensorlogic_train::Trainer; // type name is illustrative

let trainer = Trainer::new(executor);
```
tensorlogic-compiler: Compile TLExpr to EinsumGraph
```rust
use tensorlogic_compiler::compile;

let graph = compile(&expr)?; // expr: a TLExpr
let outputs = executor.execute(&graph, &inputs)?;
```
Performance Considerations
Optimization Checklist
- Enable fusion for consecutive operations
- Use batch execution for multiple inputs
- Enable memory pooling to reduce allocations
- Use streaming for large datasets that don't fit in memory
- Profile execution to identify bottlenecks
- Optimize placement for multi-device execution
- Cache results for repeated computations
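Several checklist items map directly onto one strategy value. A rough illustration, reusing the hypothetical field names from the ExecutionStrategy section above:

```rust
// Hypothetical throughput-oriented configuration; field names are
// illustrative, not the crate's confirmed API.
let strategy = ExecutionStrategy {
    enable_fusion: true,      // fuse consecutive operations
    enable_caching: true,     // cache results of repeated computations
    enable_memory_pool: true, // recycle tensor allocations
    ..Default::default()
};
```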
Benchmarking
```bash
cargo bench
```
Testing
```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run a specific test
cargo test <test_name>
```
Test Coverage: 189 tests covering all traits and utilities (100% passing)
Contributing
See CONTRIBUTING.md for guidelines.
License
Apache-2.0
Status: 🎉 Production Ready (v0.1.0-alpha.1)
Last Updated: 2025-11-06
Tests: 189 passing (100%)
Completeness: ~95%
New Features: Comprehensive debugging tools (ExecutionTracer, TensorInspector, BreakpointManager)
Part of: TensorLogic Ecosystem