xad-rs 0.1.1

Automatic differentiation library for Rust — forward/reverse mode AD, a Rust port of the C++ XAD library (https://github.com/auto-differentiation/xad)

xad-rs is a Rust port of the C++ XAD automatic differentiation library. It provides fast, ergonomic, and type-safe forward and reverse mode automatic differentiation (AD) suitable for use in scientific computing, quantitative finance, machine learning, optimization, and any other setting where exact derivatives are required.

This is an unofficial, independent port and is not affiliated with the upstream XAD project. The original C++ XAD library is authored and maintained by the team at https://auto-differentiation.github.io.


Why XAD?

Automatic differentiation (a.k.a. algorithmic differentiation) computes derivatives of arbitrary programs exactly, to machine precision, without symbolic manipulation or finite-difference error. xad-rs exposes several AD flavours so you can pick the cheapest mode for the problem at hand:

Type     | Mode                          | Order     | Seeds                    | Best for
---------|-------------------------------|-----------|--------------------------|----------
FReal<T> | Forward (tangent-linear)      | 1st       | 1 direction              | few inputs, many outputs
Dual     | Forward, multi-variable       | 1st       | n directions in one pass | full gradient in one forward sweep
Dual2<T> | Forward, second-order         | 1st + 2nd | 1 direction              | diagonal Hessian / gamma
AReal<T> | Reverse (adjoint), tape-based | 1st       | 1 adjoint seed           | many inputs, few outputs (gradients of scalar losses)
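The tangent-linear idea behind the forward-mode types in the table can be sketched in a few lines of plain Rust. This is a from-scratch illustration of how a dual number propagates one seeded direction (the `MiniDual` name and methods are invented for this sketch, not part of xad-rs):

```rust
// A minimal tangent-linear "dual number": carries a value and the derivative
// of that value along one seeded input direction.
#[derive(Clone, Copy, Debug)]
struct MiniDual {
    v: f64, // value
    d: f64, // tangent (derivative w.r.t. the seeded input)
}

impl MiniDual {
    fn var(v: f64) -> Self { Self { v, d: 1.0 } } // seed dx/dx = 1
    fn mul(self, o: Self) -> Self {
        // product rule: (fg)' = f'g + fg'
        Self { v: self.v * o.v, d: self.d * o.v + self.v * o.d }
    }
    fn sin(self) -> Self {
        // chain rule: sin(f)' = f' * cos(f)
        Self { v: self.v.sin(), d: self.d * self.v.cos() }
    }
}

fn main() {
    // f(x) = x^2 * sin(x) at x = 2; f'(x) = 2x*sin(x) + x^2*cos(x)
    let x = MiniDual::var(2.0);
    let f = x.mul(x).mul(x.sin());
    let expected = 2.0 * 2.0 * 2.0_f64.sin() + 4.0 * 2.0_f64.cos();
    assert!((f.d - expected).abs() < 1e-12);
    println!("f = {}, df/dx = {}", f.v, f.d);
}
```

FReal<T> generalizes this to arbitrary scalar types and the full math library; Dual carries n tangents at once instead of one.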

Higher-level helpers:

  • jacobian — full Jacobians of vector-valued functions (e.g. compute_jacobian_rev).
  • hessian — full Hessians of scalar functions (e.g. compute_hessian).


Installation

Add xad-rs to your Cargo.toml:

[dependencies]
xad-rs = "0.1"

Or track the development branch directly:

[dependencies]
xad-rs = { git = "https://github.com/sercanatalik/xad-rs" }

Minimum supported Rust version (MSRV): 1.85 (required by Rust edition 2024).

The library target is xad_rs (Cargo rewrites the - to _), so you import it as:

use xad_rs::areal::AReal;
use xad_rs::dual::Dual;
use xad_rs::dual2::Dual2;
use xad_rs::freal::FReal;
use xad_rs::tape::Tape;
use xad_rs::{math, jacobian, hessian};

Quick start

Reverse mode (gradient of a scalar function)

use xad_rs::areal::AReal;
use xad_rs::tape::Tape;

// f(x, y) = x^2 * y + sin(x)
let mut tape = Tape::<f64>::new(true);
tape.activate();

let mut x = AReal::new(3.0);
let mut y = AReal::new(4.0);
AReal::register_input(std::slice::from_mut(&mut x), &mut tape);
AReal::register_input(std::slice::from_mut(&mut y), &mut tape);

let mut f = &(&x * &x) * &y + xad_rs::math::ad::sin(&x);

AReal::register_output(std::slice::from_mut(&mut f), &mut tape);
f.set_adjoint(&mut tape, 1.0);
tape.compute_adjoints();

println!("f        = {}",  f.value());
println!("df/dx    = {}",  x.adjoint(&tape)); // 2xy + cos(x)
println!("df/dy    = {}",  y.adjoint(&tape)); // x^2
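Whatever AD backend produces them, the value and adjoints above should agree with the closed-form derivatives. A plain-f64 cross-check of the same function, useful when validating a tape-based result:

```rust
// Analytic cross-check for f(x, y) = x^2 * y + sin(x) at (3, 4).
fn main() {
    let (x, y) = (3.0_f64, 4.0_f64);
    let f = x * x * y + x.sin(); // 36 + sin(3)
    let dfdx = 2.0 * x * y + x.cos(); // 24 + cos(3)
    let dfdy = x * x; // 9
    assert!((f - 36.141_120_008_059_867).abs() < 1e-12);
    assert!((dfdx - 23.010_007_503_399_555).abs() < 1e-12);
    assert_eq!(dfdy, 9.0);
    println!("f = {f}, df/dx = {dfdx}, df/dy = {dfdy}");
}
```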

Forward mode, full gradient in one pass (Dual)

use xad_rs::dual::Dual;

// f(x, y) = x^2 * y, at (3, 4)
let n = 2;
let x = Dual::variable(3.0, 0, n);
let y = Dual::variable(4.0, 1, n);
let f = &(&x * &x) * &y;

assert_eq!(f.real(), 36.0);
assert_eq!(f.partial(0), 24.0); // df/dx = 2xy
assert_eq!(f.partial(1),  9.0); // df/dy = x^2

Second-order forward mode (Dual2)

use xad_rs::dual2::Dual2;

// f(x) = x^3 at x = 2
let x = Dual2::variable(2.0_f64);
let y = x * x * x;
assert_eq!(y.value(), 8.0);
assert_eq!(y.first_derivative(),  12.0); // 3x^2
assert_eq!(y.second_derivative(), 12.0); // 6x
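Second-order forward mode amounts to carrying a truncated Taylor triple (value, first, second derivative) through each operation. A from-scratch sketch of the mechanics, with names (`Taylor2`) invented for illustration rather than taken from xad-rs:

```rust
// Carry (f, f', f'') through a multiplication using the product rule twice:
// (fg)'  = f'g + fg'
// (fg)'' = f''g + 2f'g' + fg''
#[derive(Clone, Copy)]
struct Taylor2 { v: f64, d1: f64, d2: f64 }

impl Taylor2 {
    fn var(v: f64) -> Self { Self { v, d1: 1.0, d2: 0.0 } }
    fn mul(self, o: Self) -> Self {
        Self {
            v: self.v * o.v,
            d1: self.d1 * o.v + self.v * o.d1,
            d2: self.d2 * o.v + 2.0 * self.d1 * o.d1 + self.v * o.d2,
        }
    }
}

fn main() {
    // f(x) = x^3 at x = 2
    let x = Taylor2::var(2.0);
    let y = x.mul(x).mul(x);
    assert_eq!(y.v, 8.0);
    assert_eq!(y.d1, 12.0); // 3x^2
    assert_eq!(y.d2, 12.0); // 6x
}
```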

Jacobian and Hessian helpers

use xad_rs::jacobian::compute_jacobian_rev;
use xad_rs::hessian::compute_hessian;

// f: R^2 -> R^2,  f(x, y) = [x*y, x + y]
let jac = compute_jacobian_rev(&[3.0, 5.0], |v| {
    vec![&v[0] * &v[1], &v[0] + &v[1]]
});

// g: R^2 -> R,  g(x, y) = x^2 * y + y^3
let hess = compute_hessian(&[2.0, 3.0], |v| {
    let x2 = &v[0] * &v[0];
    let y3 = &v[1] * &v[1] * &v[1];
    x2 * &v[1] + y3
});
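An independent way to sanity-check the helper outputs is a central finite difference. For the examples above, the expected results are J = [[5, 3], [1, 1]] at (3, 5) and H = [[6, 4], [4, 18]] at (2, 3). A minimal gradient-by-differences sketch (the `fd_grad` helper is illustrative, not part of xad-rs):

```rust
// Central finite-difference gradient: (f(x + h e_i) - f(x - h e_i)) / 2h.
fn fd_grad(f: impl Fn(&[f64]) -> f64, x: &[f64], h: f64) -> Vec<f64> {
    (0..x.len()).map(|i| {
        let mut up = x.to_vec(); up[i] += h;
        let mut dn = x.to_vec(); dn[i] -= h;
        (f(&up) - f(&dn)) / (2.0 * h)
    }).collect()
}

fn main() {
    // g(x, y) = x^2 * y + y^3 at (2, 3): dg/dx = 2xy = 12, dg/dy = x^2 + 3y^2 = 31
    let g = |v: &[f64]| v[0] * v[0] * v[1] + v[1] * v[1] * v[1];
    let grad = fd_grad(g, &[2.0, 3.0], 1e-6);
    assert!((grad[0] - 12.0).abs() < 1e-4);
    assert!((grad[1] - 31.0).abs() < 1e-4);
    println!("{grad:?}");
}
```

Differencing the gradient itself recovers the Hessian entries quoted above, with the usual truncation error that AD avoids.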

Examples

Runnable, real-world samples live under examples/. They are Rust counterparts of the upstream C++ XAD samples, extended with additional AD modes where the comparison is instructive.

Example            | What it demonstrates
-------------------|----------------------
jacobian.rs        | 4x4 Jacobian of a non-trivial vector function (reverse mode).
hessian.rs         | 4x4 Hessian with analytic cross-check.
fixed_rate_bond.rs | YTM / duration / convexity via AReal, FReal, and Dual2, with timings.
swap_pricer.rs     | Interest-rate swap DV01 and diagonal gamma via reverse mode, multi-var Dual, and Dual2.
fx_option.rs       | Garman–Kohlhagen FX option price + 6-input gradient + spot gamma.

Run any example with:

cargo run --release --example swap_pricer
cargo run --release --example fixed_rate_bond
cargo run --release --example fx_option
cargo run --release --example hessian
cargo run --release --example jacobian

The financial examples print timing tables comparing the AD modes so you can see the reverse-vs-forward trade-off for a given problem shape.


Design notes

  • No heavy dependencies. Only num-traits is required at build time; approx is a dev-dependency for tests.
  • Tape storage is thread-local. A Tape<T> is activated for the current thread and all AReal<T> operations implicitly record to it. Deactivate before dropping to keep thread state clean.
  • Forward mode is allocation-light. Dual keeps the tangent vector in a single Vec<f64> and fuses operator loops so forward propagation is autovectorizable.
  • Shared sub-expressions are your friend. The swap_pricer example shows how sharing a per-tenor discount factor between the fixed and floating legs halves the tape size and the forward-tangent traffic.
  • Zero-alloc operator fast paths. After the April 2026 perf refactor, every AReal binary op uses Tape::push_binary / push_unary fixed-arity helpers that push operands directly onto the tape — no Vec::with_capacity(2) per op, no intermediate slice. See CHANGELOG.md for the 8-stage walkthrough.
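The fixed-arity recording idea can be sketched with a toy tape: each binary op pushes exactly one node holding its two operand slots and local partials, so recording never allocates per operation. All names here (`MiniTape`, `Node`) are invented for this sketch and are not the crate's internals:

```rust
// Toy fixed-arity tape: one Node per binary op, adjoints swept in reverse.
#[derive(Clone, Copy)]
struct Node { lhs: usize, rhs: usize, dl: f64, dr: f64 } // operand slots + local partials

struct MiniTape { nodes: Vec<Node>, adjoints: Vec<f64> }

impl MiniTape {
    fn new() -> Self { Self { nodes: Vec::new(), adjoints: Vec::new() } }
    fn input(&mut self) -> usize {
        // inputs have no operands; usize::MAX marks "leaf"
        self.nodes.push(Node { lhs: usize::MAX, rhs: usize::MAX, dl: 0.0, dr: 0.0 });
        self.adjoints.push(0.0);
        self.nodes.len() - 1
    }
    // fixed arity: one Node pushed, no per-op Vec or slice
    fn push_binary(&mut self, lhs: usize, rhs: usize, dl: f64, dr: f64) -> usize {
        self.nodes.push(Node { lhs, rhs, dl, dr });
        self.adjoints.push(0.0);
        self.nodes.len() - 1
    }
    fn compute_adjoints(&mut self, output: usize) {
        self.adjoints[output] = 1.0;
        for i in (0..self.nodes.len()).rev() {
            let (n, a) = (self.nodes[i], self.adjoints[i]);
            if n.lhs != usize::MAX {
                self.adjoints[n.lhs] += n.dl * a; // propagate to operands
                self.adjoints[n.rhs] += n.dr * a;
            }
        }
    }
}

fn main() {
    // f(x, y) = x * y at (3, 5): local partials are (y, x)
    let mut t = MiniTape::new();
    let (x, y) = (t.input(), t.input());
    let f = t.push_binary(x, y, 5.0, 3.0);
    t.compute_adjoints(f);
    assert_eq!(t.adjoints[x], 5.0); // df/dx = y
    assert_eq!(t.adjoints[y], 3.0); // df/dy = x
}
```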

Running the test suite

cargo test

The integration suite under tests/integration_tests.rs covers 50+ scenarios: basic operator correctness, transcendentals, higher-order derivatives, Jacobian/Hessian helpers, and cross-mode consistency checks.


License

xad-rs is licensed under the GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later), matching the license of the upstream XAD project. See LICENSE.md (verbatim copy of the upstream XAD LICENSE.md) or the AGPL text at https://www.gnu.org/licenses/agpl-3.0.html for the full terms.

If the AGPL is not compatible with your use case, please contact the upstream XAD maintainers at https://auto-differentiation.github.io about their commercial licensing options for the original C++ library.


Acknowledgements

  • The C++ XAD library by the auto-differentiation team — the architectural inspiration for this port, and the source of the financial examples.
  • The Rust num-traits crate for generic-scalar plumbing.