Automatic differentiation engine for ToRSh
This crate provides a PyTorch-compatible autograd API built on scirs2-autograd's automatic differentiation engine.
§Quick Start
use torsh_autograd::prelude::*;
// Enable gradient computation
let x = tensor::ones(&[2, 3]).requires_grad_(true);
let y = x.pow(2).sum();
// Compute gradients
y.backward();
let grad = x.grad();
§Architecture
The autograd system is built around several key components:
- Gradient computation: Automatic computation of gradients through computation graphs
- Tensor operations: Differentiable tensor operations with gradient tracking
- Variable management: Thread-local variable environments for gradient storage
- Guard system: RAII guards for gradient mode management (see the sketch after this list)
- Anomaly detection: Detection and recovery from numerical anomalies
- SciRS2 integration: Deep integration with the SciRS2 autograd system
- Hardware acceleration: Multi-platform support (CUDA, Metal, WebGPU)
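As a sketch of how the guard system is used, the snippet below pairs the re-exported no_grad() with is_grad_enabled(). It assumes no_grad() returns a NoGradGuard that restores the previous mode on drop and that is_grad_enabled() returns bool; both signatures are inferred from the re-export names rather than confirmed here.
use torsh_autograd::prelude::*;
assert!(is_grad_enabled()); // assumed: gradients enabled by default
{
    // Assumed: the returned NoGradGuard disables tracking for this scope
    let _guard = no_grad();
    assert!(!is_grad_enabled());
} // guard dropped here; previous mode restored
assert!(is_grad_enabled());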
§API Stability
The crate follows semantic versioning with clearly defined stability levels:
- Stable APIs (stable_api::stable): Core functionality with backward compatibility
- Beta APIs (stable_api::beta): Feature-complete but may evolve
- Experimental APIs (stable_api::experimental): May change significantly
See stable_api for details on stability guarantees.
§Examples
The examples module provides comprehensive usage examples:
- Basic gradient computation
- Inference with no_grad
- Gradient accumulation
- Custom differentiable functions
- Higher-order gradients
- Hardware acceleration
- Distributed training
Run all examples: examples::run_all_examples()
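To invoke them from your own code, a minimal sketch (assuming run_all_examples takes no arguments; any return value is ignored, since its type is not shown here):
use torsh_autograd::examples;
fn main() {
    // Assumed zero-argument call that runs every bundled example in sequence.
    let _ = examples::run_all_examples();
}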
§Key Modules
§Core Autograd
- autograd_traits: Core traits for differentiable tensors
- gradient_storage: Thread-safe gradient storage management
- grad_mode: Global gradient computation mode management (manual push/pop sketch after this list)
- guards: RAII guards for automatic gradient mode restoration
- variable_env: Thread-local variable environment management
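A minimal sketch of manual mode management with the grad_mode stack; the push/pop frame semantics are assumed from the re-exported function names, not confirmed:
use torsh_autograd::prelude::*;
push_grad_enabled(false); // assumed: pushes a new mode frame
assert!(!is_grad_enabled());
// ... inference-only work; no graph should be recorded ...
pop_grad_enabled(); // assumed: restores the previous frame
Prefer the RAII guards when the region can panic, since a guard also restores the mode on unwind.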
§Advanced Features
- complex_ops: Complex number operations with Wirtinger derivatives
- anomaly_detection: Numerical anomaly detection and automatic recovery
- gradient_clipping: Gradient clipping strategies (illustrative sketch after this list)
- checkpoint_scheduler: Memory-efficient gradient checkpointing
- higher_order_gradients: Higher-order derivative computation
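To make the clipping idea concrete, here is an illustrative global-norm clip in plain Rust; it is independent of the gradient_clipping module's actual API:
// Scale all gradients so their combined L2 norm stays within max_norm.
fn clip_by_global_norm(grads: &mut [Vec<f32>], max_norm: f32) {
    let norm = grads
        .iter()
        .flat_map(|g| g.iter())
        .map(|v| v * v)
        .sum::<f32>()
        .sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        for g in grads.iter_mut() {
            for v in g.iter_mut() {
                *v *= scale;
            }
        }
    }
}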
§Hardware & Performance
- hardware_acceleration: Multi-platform hardware acceleration
- profiler: Performance profiling and analysis
- simd_ops: SIMD-optimized gradient operations
- distributed: Distributed gradient computation (plain-Rust sketch after this list)
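As an illustration of the data-parallel idea behind the parallel and distributed modules (not their actual API), each shard's gradient can be computed on its own thread and joined afterwards:
use std::thread;
// Illustrative: the gradient of sum(x^2) is 2x, computed per shard in parallel.
fn parallel_grads(shards: Vec<Vec<f32>>) -> Vec<Vec<f32>> {
    let handles: Vec<_> = shards
        .into_iter()
        .map(|shard| {
            thread::spawn(move || shard.into_iter().map(|x| 2.0 * x).collect::<Vec<f32>>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}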
§Integration & Compatibility
- pytorch_compat: PyTorch compatibility layer
- jax_transformations: JAX-style transformations
- tensorflow_compat: TensorFlow compatibility
- mlx_compat: Apple MLX compatibility
§Feature Flags
- default: Enables autograd, SIMD, and parallel features
- autograd: SciRS2 autograd integration
- simd: SIMD optimizations
- parallel: Parallel gradient computation
- gpu: GPU acceleration support
- webgpu: WebGPU for browser deployment
- profiling: Performance profiling tools
- scirs2-full: All SciRS2 features
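A hypothetical Cargo.toml entry selecting a subset of these flags (the version number is a placeholder):
[dependencies]
torsh-autograd = { version = "0.1", default-features = false, features = ["autograd", "gpu"] }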
Re-exports§
pub use crate::error_handling::{AutogradError, AutogradResult};
pub use crate::autograd_traits::{AutogradTensor, AutogradTensorFactory, BackwardTensor, GradientAccumulation};
pub use crate::global_adapter::{backward_global, create_gradient_tensor, get_global_adapter, get_gradient_global};
pub use crate::grad_mode::{is_grad_enabled, pop_grad_enabled, push_grad_enabled, set_grad_enabled, with_grad_mode};
pub use crate::guards::{enable_grad, no_grad, EnableGradGuard, GradModeGuard, NoGradGuard};
pub use crate::gradient_storage::{get_gradient_storage, GlobalGradientStorage, GradientStorage, HashMapGradientStorage};
pub use crate::variable_env::{clear_variable_env, get_or_create_variable_env, handle_inplace_operation, is_variable_env_initialized, validate_inplace_operation, with_variable_env, InplaceConfig, InplaceStrategy};
pub use crate::complex_ops::backward_complex;
pub use crate::pytorch_compat::backward;
pub use crate::anomaly_detection::detect_complex_anomalies;
pub use crate::anomaly_detection::recovery::{AnomalyRecoverySystem, RecoveryConfig, RecoveryResult, RecoveryStats, RecoveryStrategy};
pub use crate::scirs2_integration::{GradientTensor, SciRS2AutogradAdapter};
pub use crate::auto_tuning::{AppliedOptimization, AutoTuningController, OptimizationType, ParameterValue, PerformanceSnapshot, TuningConfig, TuningRecommendation, TuningStatistics};
pub use crate::error_diagnostics::{DiagnosticRecommendation, DiagnosticReport, DiagnosticStatus, DiagnosticsConfig, ErrorCorrelation, ErrorDiagnosticsSystem, ErrorPattern, LabeledErrorEvent, MLAnalysisResult, MLPatternPrediction, MLPatternRecognitionSystem, MLSystemConfig, PatternLabel, SeverityLevel, TemporalContext};
Modules§
- accumulate - Accumulate gradients with overflow protection
- ad_framework_compatibility - Compatibility Layers for Different AD Frameworks
- anomaly_alerts - Anomaly Alerting System for Autograd
- anomaly_detection - Anomaly detection and automatic recovery for gradient computation
- audit_logging - Autograd Operation Audit Logging
- auto_tuning - Automatic Performance Tuning for Autograd Operations
- autograd_traits - Core traits for automatic differentiation tensors
- automatic_error_recovery - Automatic error recovery for transient failures in autograd operations
- blas_integration - BLAS Integration for Efficient Linear Algebra Gradients
- buffer_optimization - Temporary buffer allocation optimization for autograd operations
- capacity_planning - Capacity Planning for Autograd Workloads
- checkpoint_scheduler - Checkpoint scheduling for optimal memory-compute trade-offs
- clip - Gradient clipping utilities
- common_utils - Common Utilities for Autograd
- communication_efficient - Communication-Efficient Distributed Training Framework
- complex_ops - Complex number operations with automatic differentiation support
- compression - Advanced gradient compression techniques for memory-limited environments
- context - Autograd context system with computation graph management
- cross_framework_verification - Cross-Framework Gradient Verification
- custom_backends - Custom Autograd Backend Interface
- differentiable_programming
- discrete_ops - Automatic differentiation through discrete operations
- distributed - Distributed autograd operations for large-scale training
- edge_case_handling - Robust edge case handling for autograd operations
- error_diagnostics - Advanced Error Diagnostics and Analysis
- error_handling - Enhanced error handling and propagation for autograd operations
- error_rate_monitoring - Error Rate Monitoring and Alerting
- examples - Comprehensive Examples for torsh-autograd
- exception_safety - Exception Safety for Autograd Operations
- external_ad_integration - External Automatic Differentiation Library Integration Framework
- federated_learning - Federated Learning Framework for ToRSh
- flamegraph - Flamegraph Generation for Autograd Operations
- flamegraph_generation - Flamegraph Generation for Autograd Operations
- forward_mode - Forward-mode automatic differentiation
- function - Enhanced autograd function framework with custom function support
- function_optimization - Function optimization and fusion framework
- garbage_collection - Automatic garbage collection for unused gradients and computation graph nodes
- global_adapter - Global SciRS2 autograd adapter for unified gradient computation
- gpu_gradient - GPU-Accelerated Gradient Computation using SciRS2-Core
- graceful_degradation - Graceful Degradation for Unsupported Operations
- grad_mode - Gradient computation mode management
- gradient_checking - Comprehensive gradient checking utilities and tests
- gradient_clipping - Advanced Gradient Clipping Utilities
- gradient_filtering - Advanced gradient filtering and smoothing techniques for robust training
- gradient_flow_analysis - Gradient Flow Analysis and Reporting
- gradient_hooks - Gradient Hooks System
- gradient_scaling - Gradient scaling strategies for different optimizers
- gradient_scheduler - Gradient computation scheduling optimization
- gradient_storage - Gradient storage management for automatic differentiation
- gradient_tracer - Gradient Computation Path Tracing
- gradient_tracing - Gradient Computation Tracing
- gradient_validation - Comprehensive gradient validation for shape and type checking
- graph_opt - Computation Graph Optimization Framework
- graph_visualization - Computation Graph Visualization
- guards - RAII guard implementations for gradient mode management
- hardware_acceleration - Hardware-Specific Autograd Acceleration
- health_diagnostics - Autograd Health Checks and Diagnostics
- higher_order_gradients - Higher-Order Gradient Computation
- hyperparameter_optimization - Gradient-based hyperparameter optimization for automatic tuning of learning rates, regularization parameters, and other hyperparameters using differentiation
- inplace_versioning - In-place operation handling with gradient safety
- integration_patterns - Integration Patterns and Best Practices
- intelligent_chunking - Intelligent Chunking System for Gradient Computation using SciRS2-Core
- interactive_debugger - Interactive Gradient Computation Debugger
- iterative_solvers - Automatic differentiation through iterative solvers
- jax_transformations - JAX-style transformations for automatic differentiation
- matrix_calculus - Matrix calculus operations for automatic differentiation
- memory - Adaptive memory management for autograd operations
- meta_gradient
- metrics_collection - Comprehensive metrics collection for gradient statistics and monitoring
- mlx_compat - MLX (Apple Machine Learning Framework) Compatibility Layer
- neural_architecture_search - Differentiable Neural Architecture Search (DNAS) support
- neural_ode - Neural ODE integration with automatic differentiation
- onnx_integration
- operation_cost_analysis - Operation Cost Analysis
- operation_introspection - Operation Introspection Tools
- operation_replay - Autograd Operation Replay and Analysis
- optimization_diff - Automatic differentiation through optimization problems
- parallel_gradient - Parallel Gradient Computation using SciRS2-Core
- parameter_server
- performance_dashboard - Performance Dashboards for Autograd Analysis
- performance_regression - Performance Regression Detection
- prelude - Public prelude for convenient importing
- profiler - Performance profiling for autograd operations
- profiling_debugging_integration - Profiling and Debugging Tools Integration
- progress_reporting - Gradient Computation Progress Reporting
- property_testing - Property-based testing for autograd operations
- pytorch_compat - PyTorch Autograd Compatibility Layer
- quantum_autograd - Quantum computing autograd extensions
- raii_resources - Auto-generated module structure
- regression_testing - Regression testing for gradient computation
- scirs2_integration - SciRS2 Integration Abstraction Layer
- scirs2_integration_testing - SciRS2 Integration Testing Framework
- simd_gradient - SIMD-Accelerated Gradient Computation using SciRS2-Core
- simd_ops - SIMD optimized gradient operations for high-performance automatic differentiation
- specialized_gradient_libs - Integration with Specialized Gradient Computation Libraries
- stable_api - Stable API Surface for torsh-autograd
- staleness_handling
- stochastic_graphs - Stochastic computation graphs for probabilistic programming
- stress_testing - Stress testing module for large computation graphs
- structured_logging - Structured logging for autograd operations
- symbolic - Symbolic differentiation for simple expressions
- tensorflow_compat - TensorFlow compatibility layer
- variable_env - Variable environment management for automatic differentiation
- visualization - Autograd Gradient Flow Visualization System
- vjp_optimization - Vector-Jacobian Product (VJP) optimizations for efficient reverse-mode automatic differentiation
Macros§
- autograd_error - Macros for convenient error creation and propagation
- autograd_guard - Convenience macro for creating RAII guards
- autograd_propagate - Macro for propagating autograd errors with additional context
- autograd_scope - Convenience macro for creating RAII scoped autograd operations
- define_custom_function - Helper macro for defining custom functions
- gradient_test_case - Convenience macro for creating test cases
- log_autograd - Convenience macros for logging
- temp_buffer - Macro for convenient temporary buffer allocation
- trace_gradient_path - Macro for easy gradient path tracing
- with_error_recovery - Macro for easy error recovery
- with_graceful_degradation - Convenience macro for executing operations with graceful degradation
- with_no_throw - Convenience macro for executing operations with no-throw guarantee
- with_strong_safety - Convenience macro for executing operations with strong exception safety
- zero_overhead_op - Macro for zero-overhead tensor operations
Structs§
- TensorVersion - Version tracking for tensor operations
Constants§
Functions§
- new_tensor_id - Generate a unique tensor ID
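A minimal usage sketch; it assumes new_tensor_id takes no arguments (as its description suggests) and that the returned ID is Debug-printable:
use torsh_autograd::new_tensor_id;
let id = new_tensor_id(); // assumed zero-argument call
println!("fresh tensor id: {id:?}"); // assumes the ID type implements Debug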