Axonml Autograd - Automatic Differentiation Engine
Provides reverse-mode automatic differentiation for computing gradients of tensor operations. This is the foundation for training neural networks using gradient descent optimization.
§Key Features
- Dynamic Computational Graph - Build graph during forward pass
- Reverse-mode Autodiff - Efficient backpropagation
- Gradient Accumulation - Support for gradient accumulation across batches
- No-grad Context - Disable gradient tracking for inference
- Automatic Mixed Precision (AMP) - F16 autocast for faster training
- Gradient Checkpointing - Trade compute for memory on large models
§Basic Example
use axonml_autograd::{Variable, no_grad};
// Create variables with gradient tracking
let x = Variable::new(tensor, true); // requires_grad = true
let w = Variable::new(weights, true);
// Forward pass builds computational graph
let y = x.matmul(&w);
let loss = y.mse_loss(&target);
// Backward pass computes gradients
loss.backward();
// Access gradients
println!("dL/dw = {:?}", w.grad());§Mixed Precision Training
§Mixed Precision Training
use axonml_autograd::amp::{autocast, AutocastGuard};
use axonml_core::DType;
// Enable F16 autocast for forward pass
let output = autocast(DType::F16, || {
model.forward(&input)
});
// Or use RAII guard
{
let _guard = AutocastGuard::new(DType::F16);
let output = model.forward(&input);
}
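Besides autocast and AutocastGuard, the amp module re-exports is_autocast_enabled, autocast_dtype, disable_autocast, and AutocastPolicy (see Re-exports below). The following sketch queries autocast state inside a scope, assuming is_autocast_enabled() returns a bool and autocast_dtype() reports the dtype active in the scope; the exact signatures may differ, so verify them in the amp module docs.

use axonml_autograd::amp::{autocast, autocast_dtype, is_autocast_enabled};
use axonml_core::DType;

let output = autocast(DType::F16, || {
    // Assumed signatures: is_autocast_enabled() -> bool,
    // autocast_dtype() -> the dtype active in this scope.
    assert!(is_autocast_enabled());
    println!("autocast dtype: {:?}", autocast_dtype());
    model.forward(&input)
});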
§Gradient Checkpointing
use axonml_autograd::checkpoint::{checkpoint, checkpoint_sequential};
// Checkpoint a single function - recomputes during backward
let output = checkpoint(|x| heavy_computation(x), &input);
// Checkpoint sequential layers in segments
let output = checkpoint_sequential(24, 4, &input, |layer_idx, x| {
layers[layer_idx].forward(x)
});
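In this sketch the arguments 24 and 4 presumably denote the total layer count and the number of checkpoint segments: only activations at segment boundaries are kept, and each segment is recomputed from its boundary during the backward pass, trading extra forward compute for memory. Verify the exact parameter order in the checkpoint module docs.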
@version 0.2.6
@author AutomataNexus Development Team
§Re-exports
pub use amp::autocast;
pub use amp::autocast_dtype;
pub use amp::disable_autocast;
pub use amp::is_autocast_enabled;
pub use amp::AutocastGuard;
pub use amp::AutocastPolicy;
pub use backward::backward;
pub use checkpoint::checkpoint;
pub use checkpoint::checkpoint_sequential;
pub use grad_fn::GradFn;
pub use grad_fn::GradientFunction;
pub use graph::ComputationGraph;
pub use graph::GraphNode;
pub use no_grad::no_grad;
pub use no_grad::NoGradGuard;
pub use variable::Variable;
§Modules
- amp
- Automatic Mixed Precision (AMP) Support
- backward
- Backward Pass - Gradient Computation
- checkpoint
- Gradient Checkpointing - Memory-Efficient Training
- functions
- Differentiable Functions - Gradient Implementations
- grad_fn
- Gradient Function Traits - Differentiable Operation Interface
- graph
- Computational Graph - Dynamic Graph Construction
- no_grad
- No-Grad Context - Disable Gradient Computation
- prelude
- Convenient imports for common autograd usage.
- variable
- Variable - Tensor with Gradient Tracking