# xad-rs
xad-rs is a Rust port of the C++ XAD
automatic differentiation library. It provides fast, ergonomic, and type-safe
forward and reverse mode automatic differentiation (AD) suitable for use in
scientific computing, quantitative finance, machine learning, optimization,
and any other setting where exact derivatives are required.
This is an unofficial, independent port and is not affiliated with the upstream XAD project. The original C++ XAD library is authored and maintained by the team at https://auto-differentiation.github.io.
## Why XAD?
Automatic differentiation (a.k.a. algorithmic differentiation) computes
derivatives of arbitrary programs exactly, to machine precision,
without symbolic manipulation or finite-difference error. xad-rs exposes
several AD flavours so you can pick the cheapest mode for the problem at
hand:
| Type | Mode | Order | Seeds | Best for |
|---|---|---|---|---|
| `FReal<T>` | Forward (tangent-linear) | 1st | 1 direction | few inputs, many outputs |
| `Dual` | Forward, multi-variable | 1st | n directions in one pass | full gradient in one forward sweep |
| `Dual2<T>` | Forward, second-order | 1st + 2nd | 1 direction | diagonal Hessian / gamma |
| `AReal<T>` | Reverse (adjoint), tape-based | 1st | 1 adjoint seed | many inputs, few outputs (gradients of scalar losses) |
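To make the "exact, to machine precision" claim concrete, here is a dependency-free `f64` comparison of a forward finite difference against the analytic derivative of `sin` at `x = 1`. Nothing from xad-rs is used; the point is only to show the step-size trade-off that AD does not have:

```rust
// Dependency-free illustration of the finite-difference error that AD
// avoids: forward differences of sin at x = 1 vs the analytic cos(1).
fn main() {
    let f = |x: f64| x.sin();
    let x = 1.0_f64;
    let exact = x.cos(); // d/dx sin(x) = cos(x), exact up to f64 rounding

    for h in [1e-4_f64, 1e-8, 1e-12] {
        let fd = (f(x + h) - f(x)) / h; // forward difference
        println!("h = {h:.0e}: |fd - exact| = {:.1e}", (fd - exact).abs());
    }
    // The error shrinks with h at first, then typically grows again as
    // floating-point cancellation dominates; AD has no step size to tune.
}
```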
Higher-level helpers:

- `jacobian::compute_jacobian_rev` / `compute_jacobian_fwd` – full Jacobians via reverse or forward mode.
- `hessian::compute_hessian` – Hessians of scalar functions.
- `checkpoint` – memory-saving tape checkpointing for long reverse-mode sweeps.
## Installation
Add `xad-rs` to your `Cargo.toml`:

```toml
[dependencies]
xad-rs = "0.1"
```
Or track the development branch directly:

```toml
[dependencies]
xad-rs = { git = "https://github.com/sercanatalik/xad-rs" }
```
Minimum supported Rust version (MSRV): 1.85 (required by Rust edition 2024).
The library target is `xad_rs` (Cargo rewrites the `-` to `_`), so you import it as:

```rust
use xad_rs::AReal;
use xad_rs::Dual;
use xad_rs::Dual2;
use xad_rs::FReal;
use xad_rs::Tape;
```
## Quick start
### Reverse mode (gradient of a scalar function)
```rust
use xad_rs::AReal;
use xad_rs::Tape;

// f(x, y) = x^2 * y + sin(x), evaluated here at (1, 2)
let mut tape = Tape::new();
tape.activate();

let mut x = AReal::new(1.0);
let mut y = AReal::new(2.0);
tape.register_input(&mut x);
tape.register_input(&mut y);

let mut f = &x * &x * &y + x.sin();
tape.register_output(&mut f);

f.set_adjoint(1.0); // seed df/df = 1
tape.compute_adjoints();

println!("f     = {}", f.value());
println!("df/dx = {}", x.derivative()); // 2xy + cos(x)
println!("df/dy = {}", y.derivative()); // x^2
```
### Forward mode, full gradient in one pass (Dual)
```rust
use xad_rs::Dual;

// f(x, y) = x^2 * y, at (3, 4)
let n = 2; // number of input directions carried through the pass
let x = Dual::variable(3.0, 0, n); // input 0, seeded with direction e_0
let y = Dual::variable(4.0, 1, n); // input 1, seeded with direction e_1
let f = &x * &x * &y;

assert_eq!(f.value(), 36.0);
assert_eq!(f.derivative(0), 24.0); // df/dx = 2xy
assert_eq!(f.derivative(1), 9.0); // df/dy = x^2
```
### Second-order forward mode (Dual2)
```rust
use xad_rs::Dual2;

// f(x) = x^3 at x = 2
let x = Dual2::variable(2.0);
let y = x * x * x;

assert_eq!(y.value(), 8.0);
assert_eq!(y.first_derivative(), 12.0); // 3x^2
assert_eq!(y.second_derivative(), 24.0); // 6x
```
### Jacobian and Hessian helpers
```rust
use xad_rs::jacobian::compute_jacobian_rev;
use xad_rs::hessian::compute_hessian;

// NOTE: the argument shapes below (point slice + closure) are
// illustrative; see the crate docs for the exact signatures.

// f: R^2 -> R^2, f(x, y) = [x*y, x + y]
let jac = compute_jacobian_rev(&[3.0, 4.0], |x| vec![&x[0] * &x[1], &x[0] + &x[1]]);

// g: R^2 -> R, g(x, y) = x^2 * y + y^3
let hess = compute_hessian(&[3.0, 4.0], |x| &x[0] * &x[0] * &x[1] + &x[1] * &x[1] * &x[1]);
```
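As a sanity check, the analytic Jacobian of the f above is J(x, y) = [[y, x], [1, 1]]. A plain-`f64` central-difference cross-check at an assumed point (3, 4), independent of the crate:

```rust
// f(x, y) = [x*y, x + y]  =>  analytic Jacobian J = [[y, x], [1, 1]]
fn f(v: [f64; 2]) -> [f64; 2] {
    [v[0] * v[1], v[0] + v[1]]
}

fn main() {
    let p = [3.0_f64, 4.0]; // assumed evaluation point
    let analytic = [[p[1], p[0]], [1.0, 1.0]];

    // Central differences as an independent cross-check of each J[i][j].
    let h = 1e-6;
    for j in 0..2 {
        let (mut up, mut dn) = (p, p);
        up[j] += h;
        dn[j] -= h;
        let (fu, fd) = (f(up), f(dn));
        for i in 0..2 {
            let approx = (fu[i] - fd[i]) / (2.0 * h);
            assert!((approx - analytic[i][j]).abs() < 1e-6);
        }
    }
    println!("central differences agree with the analytic Jacobian");
}
```

The same pattern (AD result vs. central differences) is what the `hessian.rs` example uses for its analytic cross-check.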
## Examples
Runnable, real-world samples live under `examples/`. They are Rust counterparts of the upstream C++ XAD samples, extended with additional modes where interesting.
| Example | What it demonstrates |
|---|---|
| `jacobian.rs` | 4x4 Jacobian of a non-trivial vector function (reverse mode). |
| `hessian.rs` | 4x4 Hessian with analytic cross-check. |
| `fixed_rate_bond.rs` | YTM / duration / convexity via `AReal`, `FReal`, and `Dual2`, with timings. |
| `swap_pricer.rs` | Interest-rate swap DV01 and diagonal gamma via reverse mode, multi-var `Dual`, and `Dual2`. |
| `fx_option.rs` | Garman–Kohlhagen FX option price + 6-input gradient + spot gamma. |
Run any example with `cargo run --release --example <name>`, substituting a name from the table above (release mode matters for the timing comparisons).
The financial examples print timing tables comparing the AD modes so you can see the reverse-vs-forward trade-off for a given problem shape.
## Design notes
- No heavy dependencies. Only `num-traits` is required at build time; `approx` is a dev-dependency for tests.
- Tape storage is thread-local. A `Tape<T>` is activated for the current thread and all `AReal<T>` operations implicitly record to it. Deactivate before dropping to keep thread state clean.
- Forward mode is allocation-light. `Dual` keeps the tangent vector in a single `Vec<f64>` and fuses operator loops so forward propagation is autovectorizable.
- Shared sub-expressions are your friend. The `swap_pricer` example shows how sharing a per-tenor discount factor between the fixed and floating legs halves the tape size and the forward-tangent traffic.
- Zero-alloc operator fast paths. After the April 2026 perf refactor, every `AReal` binary op uses the `Tape::push_binary` / `Tape::push_unary` fixed-arity helpers that push operands directly onto the tape: no `Vec::with_capacity(2)` per op, no intermediate slice. See `CHANGELOG.md` for the 8-stage walkthrough.
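The thread-local activation pattern described above can be sketched in plain std Rust. Everything here (`MiniTape`, `record`, the op strings) is a hypothetical, simplified stand-in, not the crate's actual `Tape<T>` API:

```rust
use std::cell::RefCell;

// Hypothetical, simplified stand-in for the crate's tape; it only
// shows the thread-local activation pattern, not real AD recording.
#[derive(Default)]
struct MiniTape {
    ops: Vec<&'static str>,
}

thread_local! {
    // One optional tape per thread; `None` means "no tape active".
    static ACTIVE: RefCell<Option<MiniTape>> = RefCell::new(None);
}

fn activate() {
    ACTIVE.with(|t| *t.borrow_mut() = Some(MiniTape::default()));
}

// Stands in for an overloaded arithmetic op: it records implicitly to
// whichever tape is active on the current thread, if any.
fn record(op: &'static str) {
    ACTIVE.with(|t| {
        if let Some(tape) = t.borrow_mut().as_mut() {
            tape.ops.push(op);
        }
    });
}

// Taking the tape out of the thread-local keeps thread state clean,
// mirroring the "deactivate before dropping" advice above.
fn deactivate() -> Option<MiniTape> {
    ACTIVE.with(|t| t.borrow_mut().take())
}

fn main() {
    activate();
    record("mul");
    record("add");
    let tape = deactivate().expect("a tape was active on this thread");
    assert_eq!(tape.ops, ["mul", "add"]);
    println!("recorded {} ops on this thread's tape", tape.ops.len());
}
```

Because the active tape lives in a thread-local, two threads can differentiate independently without locking, which is why activation and deactivation are per-thread operations.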
## Running the test suite
The integration suite under `tests/integration_tests.rs` covers 50+ scenarios: basic operator correctness, transcendentals, higher-order derivatives, Jacobian/Hessian helpers, and cross-mode consistency checks. Run it with `cargo test`.
## License
xad-rs is licensed under the GNU Affero General Public License v3.0
or later (AGPL-3.0-or-later), matching the license of the upstream
XAD project. See LICENSE.md (verbatim copy of the
upstream XAD LICENSE.md) or the AGPL text at
https://www.gnu.org/licenses/agpl-3.0.html for the full terms.
If the AGPL is not compatible with your use case, please contact the upstream XAD maintainers at https://auto-differentiation.github.io about their commercial licensing options for the original C++ library.
## Acknowledgements
- The C++ XAD library by the auto-differentiation team, the architectural inspiration for this port and the source of the financial examples.
- The Rust `num-traits` crate for generic-scalar plumbing.