pub enum MultiAD {
Inp,
Add,
Sub,
Mul,
Div,
Pow,
Sin,
Cos,
Tan,
Exp,
Ln,
Sqrt,
Abs,
}
Multi-variable automatic differentiation operations.
Represents operations in a computational graph for functions with multiple inputs. Each operation takes references to previous results via indices.
§Examples
use petite_ad::{MultiAD, multi_ops};

// Build graph: f(x, y) = sin(x) * (x + y)
let exprs = multi_ops![
    (inp, 0),    // x at index 0
    (inp, 1),    // y at index 1
    (add, 0, 1), // x + y at index 2
    (sin, 0),    // sin(x) at index 3
    (mul, 2, 3), // sin(x) * (x + y) at index 4
];
let (value, grad_fn) = MultiAD::compute_grad(&exprs, &[0.6, 1.4]).unwrap();
let gradients = grad_fn(1.0);
println!("f(0.6, 1.4) = {}", value);
println!("∇f = {:?}", gradients);
Variants§
Inp
Input placeholder - references an input variable
Add
Addition: a + b
Sub
Subtraction: a - b
Mul
Multiplication: a * b
Div
Division: a / b
§Notes
- Delegates to f64::div(), which returns inf for division by zero
- Returns NaN for 0.0 / 0.0
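The IEEE 754 behavior described above can be checked directly on f64, independent of this crate:

```rust
fn main() {
    // Division by zero with a nonzero numerator yields signed infinity.
    let a: f64 = 1.0 / 0.0;
    assert!(a.is_infinite() && a.is_sign_positive());

    // 0.0 / 0.0 is indeterminate and yields NaN.
    let b: f64 = 0.0 / 0.0;
    assert!(b.is_nan());
}
```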
Pow
Power: a^b (a raised to the power of b)
§Notes
- Delegates to f64::powf()
- For x^n where n is an integer, consider using repeated multiplication
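A quick sketch of the integer-exponent note: the standard library's f64::powi (or plain repeated multiplication) agrees with the general f64::powf for small integer exponents and is typically cheaper:

```rust
fn main() {
    let x: f64 = 1.7;
    // General real-valued power.
    let via_powf = x.powf(3.0);
    // Integer power and repeated multiplication.
    let via_powi = x.powi(3);
    let via_mul = x * x * x;
    assert!((via_powf - via_mul).abs() < 1e-12);
    assert!((via_powi - via_mul).abs() < 1e-12);
}
```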
Sin
Sine function: sin(x)
§Notes
- Delegates to f64::sin(), which operates in radians
- Returns values in the range [-1.0, 1.0]
Cos
Cosine function: cos(x)
§Notes
- Delegates to f64::cos(), which operates in radians
- Returns values in the range [-1.0, 1.0]
Tan
Tangent function: tan(x)
§Notes
- Delegates to f64::tan(), which operates in radians
- Returns very large values near π/2 + kπ (asymptotes)
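The asymptote behavior is visible directly on f64: just below π/2 the tangent blows up, and because π/2 itself is not exactly representable, f64::tan never actually returns infinity there:

```rust
fn main() {
    use std::f64::consts::FRAC_PI_2;
    // Just below the asymptote at π/2, tan(x) is very large.
    let x = FRAC_PI_2 - 1e-8;
    assert!(x.tan() > 1.0e7);
    // FRAC_PI_2 only approximates π/2, so the result stays finite.
    assert!(FRAC_PI_2.tan().is_finite());
}
```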
Exp
Exponential function: exp(x)
§Notes
- Delegates to f64::exp()
- Returns inf for very large inputs (> ~709 for f64)
- Returns 0.0 for very large negative inputs (< ~-745 for f64)
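The ~709 and ~-745 thresholds come from ln(f64::MAX) ≈ 709.78 and the log of the smallest subnormal; they can be verified directly:

```rust
fn main() {
    // Overflow to +inf just above ln(f64::MAX) ≈ 709.78.
    assert!((710.0f64).exp().is_infinite());
    assert!((709.0f64).exp().is_finite());
    // Underflow to exactly 0.0 once the result is below
    // half the smallest subnormal f64.
    assert_eq!((-746.0f64).exp(), 0.0);
    assert!((-744.0f64).exp() > 0.0);
}
```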
Ln
Natural logarithm: ln(x)
§Notes
- Delegates to f64::ln()
- Returns NaN for negative inputs
- Returns -inf for ln(0.0)
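Both edge cases follow the standard library's behavior and are easy to confirm:

```rust
fn main() {
    // Negative inputs are outside the real domain of ln.
    assert!((-1.0f64).ln().is_nan());
    // ln(0.0) diverges to negative infinity.
    assert_eq!(0.0f64.ln(), f64::NEG_INFINITY);
    // Sanity check: ln(1) = 0.
    assert_eq!(1.0f64.ln(), 0.0);
}
```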
Sqrt
Square root: sqrt(x)
Abs
Absolute value: abs(x)
§Notes
- Delegates to f64::abs()
- Subgradient at x = 0 is 0 (consistent with common practice)
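The subgradient convention above can be sketched with a plain helper (hypothetical, not part of this crate's public API): d/dx |x| = sign(x) away from zero, with the subgradient at x = 0 chosen as 0.

```rust
// Hypothetical helper illustrating the convention used for Abs:
// the derivative is the sign of x, and the subgradient at 0 is 0.
fn abs_subgrad(x: f64) -> f64 {
    if x > 0.0 {
        1.0
    } else if x < 0.0 {
        -1.0
    } else {
        0.0
    }
}

fn main() {
    assert_eq!(abs_subgrad(3.5), 1.0);
    assert_eq!(abs_subgrad(-2.0), -1.0);
    assert_eq!(abs_subgrad(0.0), 0.0);
}
```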
Implementations§
impl MultiAD
pub fn compute(exprs: &[(MultiAD, Vec<usize>)], inputs: &[f64]) -> Result<f64>
Compute forward pass only (no gradient computation).
Evaluates the computational graph to produce the final output value.
§Arguments
- exprs - Slice of (operation, indices) pairs defining the computation graph
- inputs - Input values for the function
§Errors
Returns Err(AutodiffError) if an operation receives incorrect arity.
§Examples
use petite_ad::{MultiAD, multi_ops};
let exprs = multi_ops![(inp, 0), (inp, 1), (add, 0, 1)];
let result = MultiAD::compute(&exprs, &[2.0, 3.0]).unwrap();
assert!((result - 5.0).abs() < 1e-10);

pub fn compute_grad_generic<W>(
    exprs: &[(MultiAD, Vec<usize>)],
    inputs: &[f64],
) -> Result<(f64, W)>
Compute forward pass and return gradient function.
Returns a tuple of (value, gradient_function). The gradient function takes a cotangent (typically 1.0) and returns a vector of gradients with respect to each input.
The result is Box-wrapped by default. If you need Arc for sharing across threads,
convert using Arc::from(box_fn).
§Arguments
- exprs - Computational graph as (operation, indices) pairs
- inputs - Input values to evaluate at
§Returns
Tuple of (output_value, gradient_function)
§Errors
Returns Err(AutodiffError) if an operation receives incorrect arity.
§Examples
use petite_ad::{MultiAD, multi_ops};
use std::sync::Arc;
let exprs = multi_ops![
    (inp, 0), (inp, 1),
    (add, 0, 1), (sin, 0), (mul, 2, 3)
];
let (value, grad_fn) = MultiAD::compute_grad(&exprs, &[0.6, 1.4]).unwrap();
let gradients = grad_fn(1.0);
// Convert to Arc if needed for sharing
let arc_grad_fn: Arc<dyn Fn(f64) -> Vec<f64>> = Arc::from(grad_fn);