Crate torsh_autograd

Automatic differentiation engine for ToRSh

This crate provides a PyTorch-compatible autograd API built on scirs2-autograd's automatic differentiation engine.

§Quick Start

use torsh_autograd::prelude::*;

// Enable gradient computation
let x = tensor::ones(&[2, 3]).requires_grad_(true);
let y = x.pow(2).sum();

// Compute gradients
y.backward();
let grad = x.grad();
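
Here y = Σᵢⱼ xᵢⱼ², so ∂y/∂xᵢⱼ = 2xᵢⱼ; since x is all ones, grad is a 2×3 tensor filled with 2.0.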

§Architecture

The autograd system is built around several key components:

  • Gradient computation: Automatic computation of gradients through computation graphs
  • Tensor operations: Differentiable tensor operations with gradient tracking
  • Variable management: Thread-local variable environments for gradient storage
  • Guard system: RAII guards for gradient mode management (see the sketch after this list)
  • Anomaly detection: Detection and recovery from numerical anomalies
  • SciRS2 integration: Deep integration with the SciRS2 autograd system
  • Hardware acceleration: Multi-platform support (CUDA, Metal, WebGPU)
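
As an illustration of the guard system, here is a minimal sketch, assuming no_grad() returns an RAII NoGradGuard that disables gradient tracking until it is dropped (both no_grad and is_grad_enabled are re-exported at the crate root, as listed below):

use torsh_autograd::{is_grad_enabled, no_grad};

fn inference_pass() {
    // Gradient tracking is assumed to be enabled by default.
    {
        let _guard = no_grad(); // disable gradient tracking for this scope
        assert!(!is_grad_enabled());
        // Forward computation here records no gradient information.
    } // _guard dropped: the previous gradient mode is restored
    assert!(is_grad_enabled());
}

The RAII approach guarantees the previous mode is restored even if the scope exits early via ? or a panic.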

§API Stability

The crate follows semantic versioning with clearly defined stability levels.

See stable_api for details on the guarantees attached to each level.

§Examples

The examples module provides comprehensive usage examples:

  • Basic gradient computation
  • Inference with no_grad
  • Gradient accumulation (sketched below)
  • Custom differentiable functions
  • Higher-order gradients
  • Hardware acceleration
  • Distributed training

Run all examples: examples::run_all_examples()
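
As a taste of the gradient-accumulation example, here is a minimal sketch built from the Quick Start API, assuming PyTorch-like semantics where repeated backward() calls add into the stored gradient until it is explicitly cleared:

use torsh_autograd::prelude::*;

let x = tensor::ones(&[2, 3]).requires_grad_(true);

// Accumulate gradients over four micro-batches.
for _ in 0..4 {
    let loss = x.pow(2).sum();
    loss.backward(); // assumed to add 2*x into x.grad() each iteration
}

// Under the assumed semantics, x.grad() now holds 4 * 2.0 = 8.0 per element.
let accumulated = x.grad();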

§Key Modules

The key modules fall into four groups: core autograd, advanced features, hardware & performance, and integration & compatibility. See the Modules section below for the full annotated list.

§Feature Flags

  • default: Enables autograd, SIMD, and parallel features
  • autograd: SciRS2 autograd integration
  • simd: SIMD optimizations
  • parallel: Parallel gradient computation
  • gpu: GPU acceleration support (see the feature-forwarding sketch after this list)
  • webgpu: WebGPU for browser deployment
  • profiling: Performance profiling tools
  • scirs2-full: All SciRS2 features
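
Downstream crates typically forward these flags through their own Cargo features. A minimal sketch, assuming your crate declares a gpu feature that enables torsh-autograd/gpu (the feature name and forwarding are illustrative):

// In your Cargo.toml (assumed forwarding):
//   [features]
//   gpu = ["torsh-autograd/gpu"]

/// Report which backend this build was compiled with.
#[cfg(feature = "gpu")]
fn autograd_backend() -> &'static str { "gpu" }

#[cfg(not(feature = "gpu"))]
fn autograd_backend() -> &'static str { "cpu" }

fn main() {
    println!("torsh-autograd backend: {}", autograd_backend());
}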

Re-exports§

pub use crate::error_handling::AutogradError;
pub use crate::error_handling::AutogradResult;
pub use crate::autograd_traits::AutogradTensor;
pub use crate::autograd_traits::AutogradTensorFactory;
pub use crate::autograd_traits::BackwardTensor;
pub use crate::autograd_traits::GradientAccumulation;
pub use crate::global_adapter::backward_global;
pub use crate::global_adapter::create_gradient_tensor;
pub use crate::global_adapter::get_global_adapter;
pub use crate::global_adapter::get_gradient_global;
pub use crate::grad_mode::is_grad_enabled;
pub use crate::grad_mode::pop_grad_enabled;
pub use crate::grad_mode::push_grad_enabled;
pub use crate::grad_mode::set_grad_enabled;
pub use crate::grad_mode::with_grad_mode;
pub use crate::guards::enable_grad;
pub use crate::guards::no_grad;
pub use crate::guards::EnableGradGuard;
pub use crate::guards::GradModeGuard;
pub use crate::guards::NoGradGuard;
pub use crate::gradient_storage::get_gradient_storage;
pub use crate::gradient_storage::GlobalGradientStorage;
pub use crate::gradient_storage::GradientStorage;
pub use crate::gradient_storage::HashMapGradientStorage;
pub use crate::variable_env::clear_variable_env;
pub use crate::variable_env::get_or_create_variable_env;
pub use crate::variable_env::handle_inplace_operation;
pub use crate::variable_env::is_variable_env_initialized;
pub use crate::variable_env::validate_inplace_operation;
pub use crate::variable_env::with_variable_env;
pub use crate::variable_env::InplaceConfig;
pub use crate::variable_env::InplaceStrategy;
pub use crate::complex_ops::backward_complex;
pub use crate::pytorch_compat::backward;
pub use crate::anomaly_detection::detect_complex_anomalies;
pub use crate::anomaly_detection::recovery::AnomalyRecoverySystem;
pub use crate::anomaly_detection::recovery::RecoveryConfig;
pub use crate::anomaly_detection::recovery::RecoveryResult;
pub use crate::anomaly_detection::recovery::RecoveryStats;
pub use crate::anomaly_detection::recovery::RecoveryStrategy;
pub use crate::scirs2_integration::GradientTensor;
pub use crate::scirs2_integration::SciRS2AutogradAdapter;
pub use crate::auto_tuning::AppliedOptimization;
pub use crate::auto_tuning::AutoTuningController;
pub use crate::auto_tuning::OptimizationType;
pub use crate::auto_tuning::ParameterValue;
pub use crate::auto_tuning::PerformanceSnapshot;
pub use crate::auto_tuning::TuningConfig;
pub use crate::auto_tuning::TuningRecommendation;
pub use crate::auto_tuning::TuningStatistics;
pub use crate::error_diagnostics::DiagnosticRecommendation;
pub use crate::error_diagnostics::DiagnosticReport;
pub use crate::error_diagnostics::DiagnosticStatus;
pub use crate::error_diagnostics::DiagnosticsConfig;
pub use crate::error_diagnostics::ErrorCorrelation;
pub use crate::error_diagnostics::ErrorDiagnosticsSystem;
pub use crate::error_diagnostics::ErrorPattern;
pub use crate::error_diagnostics::LabeledErrorEvent;
pub use crate::error_diagnostics::MLAnalysisResult;
pub use crate::error_diagnostics::MLPatternPrediction;
pub use crate::error_diagnostics::MLPatternRecognitionSystem;
pub use crate::error_diagnostics::MLSystemConfig;
pub use crate::error_diagnostics::PatternLabel;
pub use crate::error_diagnostics::SeverityLevel;
pub use crate::error_diagnostics::TemporalContext;

Modules§

accumulate
Accumulate gradients with overflow protection
ad_framework_compatibility
Compatibility Layers for Different AD Frameworks
anomaly_alerts
Anomaly Alerting System for Autograd
anomaly_detection
Anomaly detection and automatic recovery for gradient computation
audit_logging
Autograd Operation Audit Logging
auto_tuning
Automatic Performance Tuning for Autograd Operations
autograd_traits
Core traits for automatic differentiation tensors
automatic_error_recovery
Automatic error recovery for transient failures in autograd operations
blas_integration
BLAS Integration for Efficient Linear Algebra Gradients
buffer_optimization
Temporary buffer allocation optimization for autograd operations
capacity_planning
Capacity Planning for Autograd Workloads
checkpoint_scheduler
Checkpoint scheduling for optimal memory-compute trade-offs
clip
Gradient clipping utilities
common_utils
Common Utilities for Autograd
communication_efficient
Communication-Efficient Distributed Training Framework
complex_ops
Complex number operations with automatic differentiation support
compression
Advanced gradient compression techniques for memory-limited environments
context
Autograd context system with computation graph management
cross_framework_verification
Cross-Framework Gradient Verification
custom_backends
Custom Autograd Backend Interface
differentiable_programming
Differentiable programming utilities
discrete_ops
Automatic differentiation through discrete operations
distributed
Distributed autograd operations for large-scale training
edge_case_handling
Robust edge case handling for autograd operations
error_diagnostics
Advanced Error Diagnostics and Analysis
error_handling
Enhanced error handling and propagation for autograd operations
error_rate_monitoring
Error Rate Monitoring and Alerting
examples
Comprehensive Examples for torsh-autograd
exception_safety
Exception Safety for Autograd Operations
external_ad_integration
External Automatic Differentiation Library Integration Framework
federated_learning
Federated Learning Framework for ToRSh
flamegraph
Flamegraph Generation for Autograd Operations
flamegraph_generation
Flamegraph Generation for Autograd Operations
forward_mode
Forward-mode automatic differentiation
function
Enhanced autograd function framework with custom function support
function_optimization
Function optimization and fusion framework
garbage_collection
Automatic garbage collection for unused gradients and computation graph nodes
global_adapter
Global SciRS2 autograd adapter for unified gradient computation
gpu_gradient
GPU-Accelerated Gradient Computation using SciRS2-Core
graceful_degradation
Graceful Degradation for Unsupported Operations
grad_mode
Gradient computation mode management
gradient_checking
Comprehensive gradient checking utilities and tests
gradient_clipping
Advanced Gradient Clipping Utilities
gradient_filtering
Advanced gradient filtering and smoothing techniques for robust training
gradient_flow_analysis
Gradient Flow Analysis and Reporting
gradient_hooks
Gradient Hooks System
gradient_scaling
Gradient scaling strategies for different optimizers
gradient_scheduler
Gradient computation scheduling optimization
gradient_storage
Gradient storage management for automatic differentiation
gradient_tracer
Gradient Computation Path Tracing
gradient_tracing
Gradient Computation Tracing
gradient_validation
Comprehensive gradient validation for shape and type checking
graph_opt
Computation Graph Optimization Framework
graph_visualization
Computation Graph Visualization
guards
RAII guard implementations for gradient mode management
hardware_acceleration
Hardware-Specific Autograd Acceleration
health_diagnostics
Autograd Health Checks and Diagnostics
higher_order_gradients
Higher-Order Gradient Computation
hyperparameter_optimization
Gradient-based hyperparameter optimization for automatic tuning of learning rates, regularization parameters, and other hyperparameters using automatic differentiation
inplace_versioning
In-place operation handling with gradient safety
integration_patterns
Integration Patterns and Best Practices
intelligent_chunking
Intelligent Chunking System for Gradient Computation using SciRS2-Core
interactive_debugger
Interactive Gradient Computation Debugger
iterative_solvers
Automatic differentiation through iterative solvers
jax_transformations
JAX-style transformations for automatic differentiation
matrix_calculus
Matrix calculus operations for automatic differentiation
memory
Adaptive memory management for autograd operations
meta_gradient
Meta-gradient computation
metrics_collection
Comprehensive metrics collection for gradient statistics and monitoring
mlx_compat
MLX (Apple Machine Learning Framework) Compatibility Layer
neural_architecture_search
Differentiable Neural Architecture Search (DNAS) support
neural_ode
Neural ODE integration with automatic differentiation
onnx_integration
Integration with the ONNX model format
operation_cost_analysis
Operation Cost Analysis
operation_introspection
Operation Introspection Tools
operation_replay
Autograd Operation Replay and Analysis
optimization_diff
Automatic differentiation through optimization problems
parallel_gradient
Parallel Gradient Computation using SciRS2-Core
parameter_server
Parameter server support for distributed training
performance_dashboard
Performance Dashboards for Autograd Analysis
performance_regression
Performance Regression Detection
prelude
Public prelude for convenient importing
profiler
Performance profiling for autograd operations
profiling_debugging_integration
Profiling and Debugging Tools Integration
progress_reporting
Gradient Computation Progress Reporting
property_testing
Property-based testing for autograd operations
pytorch_compat
PyTorch Autograd Compatibility Layer
quantum_autograd
Quantum computing autograd extensions
raii_resources
Auto-generated module structure
regression_testing
Regression testing for gradient computation
scirs2_integration
SciRS2 Integration Abstraction Layer
scirs2_integration_testing
SciRS2 Integration Testing Framework
simd_gradient
SIMD-Accelerated Gradient Computation using SciRS2-Core
simd_ops
SIMD optimized gradient operations for high-performance automatic differentiation
specialized_gradient_libs
Integration with Specialized Gradient Computation Libraries
stable_api
Stable API Surface for torsh-autograd
staleness_handling
Handling of stale gradients in asynchronous training
stochastic_graphs
Stochastic computation graphs for probabilistic programming
stress_testing
Stress testing module for large computation graphs
structured_logging
Structured logging for autograd operations
symbolic
Symbolic differentiation for simple expressions
tensorflow_compat
TensorFlow Compatibility Layer
variable_env
Variable environment management for automatic differentiation
visualization
Autograd Gradient Flow Visualization System
vjp_optimization
Vector-Jacobian Product (VJP) optimizations for efficient reverse-mode automatic differentiation

Macros§

autograd_error
Macros for convenient error creation and propagation
autograd_guard
Convenience macro for creating RAII guards
autograd_propagate
Macro for propagating autograd errors with additional context
autograd_scope
Convenience macro for creating RAII scoped autograd operations
define_custom_function
Helper macro for defining custom functions
gradient_test_case
Convenience macro for creating test cases
log_autograd
Convenience macros for logging
temp_buffer
Macro for convenient temporary buffer allocation
trace_gradient_path
Macro for easy gradient path tracing
with_error_recovery
Macro for easy error recovery
with_graceful_degradation
Convenience macro for executing operations with graceful degradation
with_no_throw
Convenience macro for executing operations with no-throw guarantee
with_strong_safety
Convenience macro for executing operations with strong exception safety
zero_overhead_op
Macro for zero-overhead tensor operations

Structs§

TensorVersion
Version tracking for tensor operations

Constants§

VERSION
VERSION_MAJOR
VERSION_MINOR
VERSION_PATCH

Functions§

new_tensor_id
Generate a unique tensor ID