# xad-rs
xad-rs is a Rust port of the C++ XAD
automatic differentiation library. It provides fast, ergonomic, and type-safe
forward and reverse mode automatic differentiation (AD) suitable for use in
scientific computing, quantitative finance, machine learning, optimization,
and any other setting where exact derivatives are required.
This is an unofficial, independent port and is not affiliated with the upstream XAD project. The original C++ XAD library is authored and maintained by the team at https://auto-differentiation.github.io.
## Why XAD?
Automatic differentiation (a.k.a. algorithmic differentiation) computes
derivatives of arbitrary programs exactly, to machine precision,
without symbolic manipulation or finite-difference error. xad-rs exposes
several AD flavours so you can pick the cheapest mode for the problem at
hand:
| Type | Mode | Order | Seeds | Best for |
|---|---|---|---|---|
| `FReal<T>` | Forward (tangent-linear) | 1st | 1 direction | few inputs, many outputs |
| `Dual` | Forward, multi-variable | 1st | n directions in one pass | full gradient in one forward sweep |
| `Dual2<T>` | Forward, second-order | 1st + 2nd | 1 direction | diagonal Hessian / gamma |
| `AReal<T>` | Reverse (adjoint), tape-based | 1st | 1 adjoint seed | many inputs, few outputs (gradients of scalar losses) |
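To make the forward-mode rows of the table concrete, here is a minimal hand-rolled tangent-linear number. It illustrates the mechanism only; the `Tangent` type and its methods are invented for this sketch and are not the xad-rs API.

```rust
// Minimal hand-rolled forward-mode (tangent-linear) number.
// Illustrative only: this is NOT the xad-rs API.
#[derive(Clone, Copy, Debug)]
struct Tangent {
    val: f64, // primal value
    dot: f64, // derivative along the single seeded direction
}

impl Tangent {
    fn var(val: f64) -> Self { Tangent { val, dot: 1.0 } } // seeded input
    fn lit(val: f64) -> Self { Tangent { val, dot: 0.0 } } // constant
    fn add(self, o: Tangent) -> Tangent {
        Tangent { val: self.val + o.val, dot: self.dot + o.dot }
    }
    fn mul(self, o: Tangent) -> Tangent {
        // product rule: (uv)' = u'v + uv'
        Tangent { val: self.val * o.val, dot: self.dot * o.val + self.val * o.dot }
    }
    fn sin(self) -> Tangent {
        // chain rule: sin(u)' = cos(u) * u'
        Tangent { val: self.val.sin(), dot: self.val.cos() * self.dot }
    }
}

fn main() {
    // f(x) = 4x^2 + sin(x) at x = 3; analytically f'(x) = 8x + cos(x)
    let x = Tangent::var(3.0);
    let f = x.mul(x).mul(Tangent::lit(4.0)).add(x.sin());
    assert!((f.dot - (8.0 * 3.0 + 3.0f64.cos())).abs() < 1e-12);
    println!("f = {}, f' = {}", f.val, f.dot);
}
```

`FReal<T>` plays this role in xad-rs: one value plus one tangent, propagated through every elementary operation. `Dual` generalizes the single `dot` to an n-vector so one sweep yields the whole gradient.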
Higher-level helpers:

- `jacobian::compute_jacobian_rev` / `compute_jacobian_fwd` – full Jacobians via reverse or forward mode.
- `hessian::compute_hessian` – Hessians of scalar functions.
- `checkpoint` – memory-saving tape checkpointing for long reverse-mode sweeps.
## Installation

Add xad-rs to your `Cargo.toml`:

```toml
[dependencies]
xad-rs = "0.1"
```

Or track the development branch directly:

```toml
[dependencies]
xad-rs = { git = "https://github.com/sercanatalik/xad-rs" }
```
Minimum supported Rust version (MSRV): 1.85 (required by Rust edition 2024).
The library target is `xad_rs` (Cargo rewrites the `-` to `_`), so you
import it as:

```rust
use xad_rs::AReal;
use xad_rs::Dual;
use xad_rs::Dual2;
use xad_rs::FReal;
use xad_rs::Tape;
```
## Quick start

### Reverse mode (gradient of a scalar function)

```rust
use xad_rs::{AReal, Tape};

// f(x, y) = x^2 * y + sin(x)
let mut tape = Tape::<f64>::new();
tape.activate();

let mut x = AReal::new(1.0); // example input values
let mut y = AReal::new(2.0);
tape.register_input(&mut x);
tape.register_input(&mut y);

let mut f = &x * &x * &y + x.sin();
tape.register_output(&mut f);

f.set_adjoint(1.0);
tape.compute_adjoints();

println!("f     = {}", f.value());
println!("df/dx = {}", x.adjoint()); // 2xy + cos(x)
println!("df/dy = {}", y.adjoint()); // x^2
```
### Forward mode, full gradient in one pass (`Dual`)

```rust
use xad_rs::Dual;

// f(x, y) = x^2 * y, at (3, 4)
let n = 2;
let x = Dual::variable(3.0, 0, n); // seeded on direction 0 of n
let y = Dual::variable(4.0, 1, n); // seeded on direction 1
let f = &x * &x * &y;

assert_eq!(f.value(), 36.0);
assert_eq!(f.derivative(0), 24.0); // df/dx = 2xy
assert_eq!(f.derivative(1), 9.0);  // df/dy = x^2
```
### Second-order forward mode (`Dual2`)

```rust
use xad_rs::Dual2;

// f(x) = x^3 at x = 2
let x = Dual2::variable(2.0);
let y = x * x * x;

assert_eq!(y.value(), 8.0);
assert_eq!(y.first_derivative(), 12.0);  // 3x^2
assert_eq!(y.second_derivative(), 12.0); // 6x
```
### Jacobian and Hessian helpers

```rust
use xad_rs::jacobian::compute_jacobian_rev;
use xad_rs::hessian::compute_hessian;

// f: R^2 -> R^2, f(x, y) = [x*y, x + y]
let jac = compute_jacobian_rev(&[3.0, 4.0], |x| vec![&x[0] * &x[1], &x[0] + &x[1]]);

// g: R^2 -> R, g(x, y) = x^2 * y + y^3
let hess = compute_hessian(&[1.0, 2.0], |x| {
    &x[0] * &x[0] * &x[1] + &x[1] * &x[1] * &x[1]
});
```
## Labeled layer (`labeled` feature)

The `labeled` feature adds string-keyed wrappers around all four AD modes,
plus a labeled reverse-mode tape (`LabeledTape`) that returns gradients as
`IndexMap<String, f64>` keyed by the variable names you chose. Useful when
you want gradient readback by name (e.g. `grad["spot"]`) instead of
positional indices.
Enable it via Cargo features:

```toml
[dependencies]
xad-rs = { version = "0.2", features = ["labeled"] }
```
Reverse-mode example, mirroring the f(x, y) = x²·y + sin(x) quick start
above but with labels:
```rust
use xad_rs::labeled::LabeledTape;

let mut tape = LabeledTape::new();
let x = tape.input("x", 1.0); // example values
let y = tape.input("y", 2.0);
let _registry = tape.freeze(); // activates the inner Tape<f64>

// f(x, y) = x²·y + sin(x)
let f = &x * &x * &y + x.sin();

let grad = tape.gradient(&f);
assert!(grad.contains_key("x"));
assert!(grad.contains_key("y"));
```
The two-phase contract is `new()` → `input()` for every named variable →
`freeze()` (activates the underlying tape) → forward closure → `gradient()`.
Calling `input()` after `freeze()` panics; calling `gradient()` before
`freeze()` panics.
See also:

- Forward-mode equivalents: `LabeledFReal`, `LabeledDual`, `LabeledDual2`; see the `labeled` module docs on docs.rs for the (cheap) `Arc<VarRegistry>` ownership pattern shared by the forward wrappers.
- Need an `Array2<f64>` Jacobian? Enable the `labeled-ndarray` sub-feature for `LabeledJacobian` and `compute_labeled_jacobian`, which delegate to `compute_jacobian_rev` and decorate the output with row + column labels.
Caveat: `LabeledTape` is `!Send` (the inner reverse-mode tape is
thread-local). Run one tape per thread; multiple threads each get an
independent tape.
## Dual2Vec — dense full Hessian in one forward pass

`Dual2Vec` is a dense multi-variable second-order forward-mode AD number
that propagates (value, gradient, Hessian) through a single forward
pass. It is the companion to the single-direction seeded `Dual2<T>`:
where `Dual2<T>` gives you the 1st + 2nd derivative along one direction,
`Dual2Vec` gives you the full n x n Hessian for all n active inputs
at once.
Enable via the `dual2-vec` feature:

```toml
[dependencies]
xad-rs = { version = "0.2", features = ["dual2-vec"] }
```
Example: f(x, y) = x²y + y³ at (x, y) = (1, 2)

```rust
use xad_rs::Dual2Vec;

let x = Dual2Vec::variable(1.0, 0, 2); // x seeded on axis 0 of dim-2 input
let y = Dual2Vec::variable(2.0, 1, 2); // y seeded on axis 1
let f = &x * &x * &y + &y * &y * &y;

// Value: f = 1^2 * 2 + 2^3 = 10
assert_eq!(f.value(), 10.0);

// Gradient: [df/dx, df/dy] = [2xy, x^2 + 3y^2] = [4, 13]
assert_eq!(f.gradient(0), 4.0);
assert_eq!(f.gradient(1), 13.0);

// Hessian: [[2y, 2x], [2x, 6y]] = [[4, 2], [2, 12]]
assert_eq!(f.hessian(0, 0), 4.0);
assert_eq!(f.hessian(0, 1), 2.0);
assert_eq!(f.hessian(1, 0), 2.0);
assert_eq!(f.hessian(1, 1), 12.0);
```
### When to use `Dual2Vec` vs seeded `Dual2<T>`

| Situation | Prefer |
|---|---|
| Need the full n x n Hessian, n ≲ 50 | `Dual2Vec` |
| Need only the diagonal (own-gamma) | seeded `Dual2<T>` |
| Need the full Hessian, n ≳ 100 | seeded `Dual2<T>` with n passes |
| Single-direction second derivative | seeded `Dual2<T>` |
Per-op cost is O(n^2) because the Hessian storage is dense n x n.
Between n ~ 50 and n ~ 100 the choice depends on the op mix —
benchmark both before committing.
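A back-of-envelope way to see the scaling pressure: each `Dual2Vec` value carries n x n Hessian entries, all of which are touched per elementary operation. The numbers below are simple arithmetic, not measurements of xad-rs itself.

```rust
// Back-of-envelope storage carried by every dense second-order value:
// n*n f64 Hessian entries, each touched on every elementary operation.
fn main() {
    for n in [10usize, 50, 100] {
        let entries = n * n;
        let bytes = entries * std::mem::size_of::<f64>();
        println!("n = {n:>3}: {entries:>6} Hessian entries, {bytes:>7} bytes per value");
    }
}
```

At n = 100 that is 10,000 entries (80 KB) per live value, which is why n seeded `Dual2<T>` passes start to win despite the repeated sweeps.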
### Elementary surface

`Dual2Vec` supports 10 unary elementaries: `sin`, `cos`, `tan`, `exp`,
`ln`, `sqrt`, `tanh`, `atan`, `asinh`, `erf`, plus `powf(k: f64)` for a
constant power and `powd(y: Dual2Vec)` for x^y with both active.
Binary operators `+`, `-`, `*`, `/` are implemented in direct closed
form with structural Hessian symmetry (upper-triangle computed and
mirrored).
See the `src/dual2vec.rs` module docs for the O(n^2) cost model, crossover
guidance, and the DO NOT derive `Div` as `Mul` ∘ `Recip` rationale.
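The single-direction flavour of those closed forms is easy to write down. This hand-rolled sketch (invented types, not the xad-rs implementation) propagates (value, first, second) through multiplication using the second-order Leibniz rule (uv)'' = u''v + 2u'v' + uv'':

```rust
// Hand-rolled single-direction second-order forward number.
// Illustrative only: not the xad-rs Dual2/Dual2Vec types.
#[derive(Clone, Copy, Debug)]
struct D2 {
    v: f64,  // value
    d1: f64, // first derivative along the seeded direction
    d2: f64, // second derivative along the seeded direction
}

impl D2 {
    fn var(v: f64) -> Self { D2 { v, d1: 1.0, d2: 0.0 } } // seeded input
    fn mul(self, o: D2) -> D2 {
        D2 {
            v: self.v * o.v,
            d1: self.d1 * o.v + self.v * o.d1, // product rule
            d2: self.d2 * o.v + 2.0 * self.d1 * o.d1 + self.v * o.d2, // 2nd-order Leibniz
        }
    }
}

fn main() {
    // f(x) = x^3 at x = 2: f' = 3x^2 = 12, f'' = 6x = 12
    let x = D2::var(2.0);
    let f = x.mul(x).mul(x);
    assert_eq!(f.v, 8.0);
    assert_eq!(f.d1, 12.0);
    assert_eq!(f.d2, 12.0);
}
```

`Dual2Vec` does the same propagation with `d1` as an n-vector and `d2` as a dense symmetric n x n matrix, which is where the mirrored upper-triangle optimization comes from.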
## Examples
Runnable, real-world samples live under examples/. They are
Rust counterparts of the upstream C++ XAD samples, extended with
additional modes where interesting.
| Example | What it demonstrates |
|---|---|
| `jacobian.rs` | 4x4 Jacobian of a non-trivial vector function (reverse mode). |
| `hessian.rs` | 4x4 Hessian with analytic cross-check. |
| `fixed_rate_bond.rs` | YTM / duration / convexity via `AReal`, `FReal`, and `Dual2`, with timings. |
| `swap_pricer.rs` | Interest-rate swap DV01 and diagonal gamma via reverse mode, multi-var `Dual`, and `Dual2`. |
| `fx_option.rs` | Garman–Kohlhagen FX option price + 6-input gradient + spot gamma. |
Run any example with:

```sh
cargo run --release --example fx_option
```
The financial examples print timing tables comparing the AD modes so you can see the reverse-vs-forward trade-off for a given problem shape.
## Design notes
- No heavy dependencies. Only `num-traits` is required at build time; `approx` is a dev-dependency for tests.
- Tape storage is thread-local. A `Tape<T>` is activated for the current thread and all `AReal<T>` operations implicitly record to it. Deactivate before dropping to keep thread state clean.
- Forward mode is allocation-light. `Dual` keeps the tangent vector in a single `Vec<f64>` and fuses operator loops so forward propagation is autovectorizable.
- Shared sub-expressions are your friend. The `swap_pricer` example shows how sharing a per-tenor discount factor between the fixed and floating legs halves the tape size and the forward-tangent traffic.
- Zero-alloc operator fast paths. After the April 2026 perf refactor, every `AReal` binary op uses `Tape::push_binary` / `push_unary` fixed-arity helpers that push operands directly onto the tape: no `Vec::with_capacity(2)` per op, no intermediate slice. See `CHANGELOG.md` for the 8-stage walkthrough.
## Running the test suite

The integration suite under `tests/integration_tests.rs`
covers 50+ scenarios: basic operator correctness, transcendentals,
higher-order derivatives, Jacobian/Hessian helpers, and cross-mode
consistency checks.
## License
xad-rs is licensed under the GNU Affero General Public License v3.0
or later (AGPL-3.0-or-later), matching the license of the upstream
XAD project. See LICENSE.md (verbatim copy of the
upstream XAD LICENSE.md) or the AGPL text at
https://www.gnu.org/licenses/agpl-3.0.html for the full terms.
If the AGPL is not compatible with your use case, please contact the upstream XAD maintainers at https://auto-differentiation.github.io about their commercial licensing options for the original C++ library.
## Acknowledgements

- The C++ XAD library by the auto-differentiation team, the architectural inspiration for this port and the source of the financial examples.
- The Rust `num-traits` crate for generic-scalar plumbing.