ARAEL
Algorithms for Robust Autonomy, Estimation, and Localization
A Rust framework for nonlinear optimization with compile-time symbolic differentiation. Define your model and constraints declaratively -- the macro system symbolically differentiates, applies common subexpression elimination, and generates compiled cost, gradient, and Gauss-Newton Hessian (`J^T J` approximation) code.
Contents
- Features
- Scope
- Quick Example: Symbolic Math
- Quick Example: Robust Linear Regression
- SLAM Path Optimization
- Starship robust error suppression
- Localization Demo
- Examples
- Solvers
- Runtime Differentiation
- Instrumentation and troubleshooting
- 2D Sketch Editor
- Project Structure
- License
Features
- Symbolic math -- expression trees with automatic differentiation, simplification, expansion, LaTeX/Rust code generation
- Compile-time constraint code generation -- write constraints symbolically, get compiled derivative code with CSE
- Levenberg-Marquardt solver -- with robust error suppression via the Starship method (US12346118), `gamma * atan(r / gamma)`, and switchable constraints (`guard = expr`)
- Multiple solver backends via the `LmSolver` trait:
  - Dense Cholesky (nalgebra) -- fixed-size dispatch up to 9x9, dynamic for larger
  - Band Cholesky -- pure Rust O(n*kd^2) for block-tridiagonal systems (9.4x faster than dense at 500 poses)
  - Sparse Cholesky (faer, pure Rust) -- for general sparse Hessians (66x faster than dense at 200 poses with 6% fill)
  - Eigen SimplicialLLT and CHOLMOD -- optional C++ backends via FFI (`--features eigen`, `--features cholmod`)
  - LAPACK band -- optional dpbsv/spbsv backend (`--features lapack`)
- Indexed sparse assembly -- precomputed position lists for zero-overhead Hessian assembly after the first iteration
- f32 and f64 precision -- `#[arael(root)]` for f64, `#[arael(root, f32)]` for f32 throughout
- Model trait -- hierarchical serialize/deserialize/update protocol for parameter optimization
- Type-safe references -- `Ref<T>`, `Vec<T>`, `Deque<T>`, `Arena<T>` for indexed collections with stable references
- Runtime differentiation -- parse equations from strings at runtime, auto-differentiate symbolically, and optimize via `ExtendedModel` + `TripletBlock` (used by the sketch editor for parametric expression dimensions)
- User-defined functions -- plug custom symbolic or native-eval operators into constraint bodies with `#[arael::function]`
- Hessian blocks -- `SelfBlock<A>` and `CrossBlock<A, B>` for 1- and 2-entity constraints (packed dense); `TripletBlock` for 3+ entities (COO sparse)
- Jacobian computation -- `#[arael(root, jacobian)]` generates `calc_jacobian()` returning a sparse Jacobian matrix for DOF analysis and constraint diagnostics (see `examples/jacobian_demo.rs`)
- Gimbal-lock-free rotations -- `EulerAngleParam` optimizes a small delta around a reference rotation matrix
- WASM/browser support -- the sketch editor compiles to WebAssembly and runs in the browser via eframe/egui
Scope
Arael is a nonlinear optimization framework, not a complete SLAM or state estimation system. The SLAM and localization demos show how to use arael as the optimizer backend, but a production SLAM pipeline would additionally need:
- Front-end perception: feature detection, descriptor extraction
- Data association: matching observed features to existing landmarks, handling ambiguous or incorrect matches
- Landmark management: initializing new landmarks from observations, merging duplicates, pruning unreliable ones
- Keyframe selection: deciding when to add new poses vs. discard redundant frames
- Loop closure: detecting revisited places, verifying loop closure candidates, and injecting constraints
- Outlier rejection logic: deciding which observations to reject
- Marginalization / sliding window: limiting optimization scope for real-time operation, marginalizing old poses while preserving their information
- Map management: spatial indexing, map saving/loading, multi-session map merging
Arael provides the compile-time-differentiated solver that sits at the core of such a system. Everything above is application-level logic that builds on top of it.
Quick Example: Symbolic Math
```rust
// Only fragments of this example survived extraction; the import paths and
// the sym! body are elided:
// use …::*;
// use …::sym;
// use …::hashmap;
// sym! { … }
```
The `sym!` macro auto-inserts `.clone()` on variable reuse, so you write natural math without Rust's ownership boilerplate.
See docs/SYM.md for the full symbolic math reference.
Quick Example: Robust Linear Regression
You describe the model as a Rust struct and the residual as an arael-sym expression; the macros do the rest.
- `#[arael::model]` auto-implements the `Model` trait for the struct: serialize / deserialize / update of every optimizable parameter, flat indexing into the residual vector, and all the hooks the solver needs.
- Every `Param<T>` field is an optimization variable. Plain fields (`data`, `sigma`, `gamma` here) are constants.
- `#[arael(fit(data, |e| ...))]` declares a least-squares fit: one residual per element of `data`, body written as a symbolic expression referencing model fields and the current data entry. The macro compiles the body into residual + gradient + Hessian code with symbolic differentiation and CSE.
The gamma * atan(plain_r / gamma) wrapper is the Starship robust error-suppression method -- residuals up to ~gamma pass through linearly, beyond that they saturate, suppressing outlier influence while staying smoothly differentiable.
The macro auto-generates `calc_cost()`, `calc_grad_hessian()`, and `fit()` methods with symbolically differentiated, CSE-optimized compiled code.
The robust fit ignores outliers while tracking the inlier data:

See docs/LINEAR.md for the full walkthrough. Full source: examples/linear_demo.rs.
SLAM Path Optimization
The earlier regression example fitted two scalar parameters against one residual. Real SLAM and bundle-adjustment problems have many coupled entities -- poses, landmarks, cameras -- with many constraint types between them. arael models the hierarchy as plain Rust structs, each annotated with #[arael::model] and one or more #[arael(constraint(...))] attributes. The macros walk the hierarchy at compile time, differentiate every residual symbolically, eliminate common subexpressions, and emit one fused calc_cost + calc_grad_hessian pair for the whole graph.
The demo (examples/slam_demo.rs) generates a synthetic S-curve trajectory with 60 poses and 240 point landmarks observed by 5 cameras. It handles 50% outlier associations with 30x pixel noise via robust suppression and graduated optimization. The solver uses faer sparse Cholesky (pure Rust) to exploit the hessian's sparsity structure.
Each entity owns its own parameters and its own SelfBlock -- the diagonal block of the Hessian for that entity. Constraints that touch a single entity accumulate into its self block; constraints that couple two entities (an odometry residual between two poses, a bearing residual between a landmark and a pose) accumulate into a CrossBlock between the pair. The assembled Hessian therefore mirrors the model hierarchy: one block row/column per entity, a self block on the diagonal, and a cross block off-diagonal wherever a constraint ties two entities together. Entities that never share a constraint remain exactly zero in that corner of the matrix -- which is where the sparsity comes from.

The pattern in the S-curve demo above shows pose-pose blocks (upper-left), pose-landmark coupling (off-diagonal), and landmark self-blocks (lower-right diagonal). The faer sparse Cholesky solver exploits this, achieving 66x speedup over dense at 200 poses.
A Pose is the robot's 6-DOF state at one timestep. Three constraint attributes stack on the same hb_pose Hessian block: a guarded GPS constraint (active only when GPS data is present), a drift regularizer that stabilises graduated optimization, and an accelerometer-based tilt constraint on roll and pitch. Every Param<...> is an optimization variable; info holds per-timestep measurements.
A frine (project vocabulary) is a structure that ties one entity to one measurement. PointFrine is the frine for a point landmark: it binds a PointLandmark to a PointFeature -- the 2D detection in one of the pose's cameras that observed it. In factor-graph SLAM terms, a frine plays the role of a Factor (GTSAM) or an Edge (g2o). The measurement itself is pre-processed once at set-up time into a 3D direction ray in the camera frame (stored on PointFeature), so the solver never touches pixel coordinates or undistortion. Staying in 3D keeps derivatives smooth and sidesteps the projective singularities that show up when you differentiate through a pixel-space reprojection.
The residual transforms the landmark into the pose's frame and then into the camera's feature frame (feature.mf2r), and compares its direction to the stored measurement via two atan2 bearings (azimuth and elevation). Each bearing is whitened by the feature's per-axis isigma and passed through the robust gamma * atan(.../gamma) wrapper for outlier tolerance.
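The two-bearing comparison is easy to sketch numerically. The helper below is illustrative only -- the axis conventions and names are assumptions, not arael's actual symbolic constraint body:

```rust
/// Azimuth/elevation bearings of a 3D direction (illustrative convention:
/// azimuth in the x-y plane, elevation against that plane).
fn bearings(d: [f64; 3]) -> (f64, f64) {
    let az = d[1].atan2(d[0]);
    let el = d[2].atan2((d[0] * d[0] + d[1] * d[1]).sqrt());
    (az, el)
}

fn main() {
    let measured = bearings([1.0, 0.0, 0.0]);     // stored ray, straight down +x
    let predicted = bearings([1.0, 0.02, -0.01]); // landmark ray, slightly off
    let r_az = predicted.0 - measured.0;          // az ≈ 0.0200 rad
    let r_el = predicted.1 - measured.1;          // el ≈ -0.0100 rad
    println!("bearing residuals: az = {r_az:.4}, el = {r_el:.4}");
    // In the real constraint each of these is whitened by the feature's
    // per-axis isigma and wrapped in gamma * atan(r / gamma).
}
```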
The #[arael(ref = ...)] attributes declare which collection each reference resolves against -- pose from root.poses, feature chained off pose.info.features -- and the constraint uses a CrossBlock<PointLandmark, Pose> because it couples two entity types.
PosePair is the odometry constraint between two consecutive poses -- a relative-motion residual whitened by a decomposed covariance. Another CrossBlock, this time Pose-to-Pose.
Finally, Path ties it all together. #[arael(root)] is what actually triggers code generation: the macro walks every constraint attribute on every reachable struct, resolves the refs, and emits calc_cost() / calc_grad_hessian() for the whole model hierarchy.
See docs/SLAM.md for the full walkthrough.
Starship robust error suppression
Both demos wrap every residual in $\gamma \arctan(r / \gamma)$. This is the Starship method (US Patent 12,346,118) -- a way to cap how much a single outlier can move the optimum without losing smooth differentiability. This section explains what it does and why.
Setup
Given sensor readings stacked into a vector $L$, model parameters $M$ (poses, landmarks, etc.), and a prediction $\mu(M)$ of what the sensors should report given $M$, Bayesian inference with $L \mid M \sim \mathcal{N}(\mu(M), K_L)$ -- where $K_L$ is the sensor covariance matrix -- leads to minimising the sum
$$ S(M) = (L - \mu(M))^T K_L^{-1} (L - \mu(M)). $$
Whitening. Diagonalising $K_L = R D R^T$ and substituting $L^D = R^T L$, $G(M) = R^T \mu(M)$ turns the quadratic form into a plain sum of squares in units of standard deviations:
$$ S(M) = \sum_i r_i^2, \qquad r_i = \frac{L_i^D - G_i(M)}{\sigma_i}. $$
The solver minimises $S(M)$ (the Gauss-Newton / LM step), and every inner term $r_i$ is dimensionless -- a pure sigma count.
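The whitening identity can be verified numerically. A self-contained 2x2 check with a hand-computed eigensystem -- plain std, no arael code:

```rust
/// Returns (quadratic form, sum of whitened squares) for a 2x2 example.
fn whitening_check() -> (f64, f64) {
    // K_L = [[2, 1], [1, 2]]: eigenvalues D = (3, 1),
    // eigenvectors v1 = (1,1)/√2, v2 = (1,-1)/√2, so K_L = R D R^T.
    let d = [3.0_f64, 1.0];
    let s2 = std::f64::consts::SQRT_2;
    let v1 = [1.0 / s2, 1.0 / s2];
    let v2 = [1.0 / s2, -1.0 / s2];
    let e = [1.0_f64, 0.5]; // residual L - μ(M)

    // Direct quadratic form, using K_L^{-1} = (1/3)·[[2,-1],[-1,2]]:
    let s_quad = (2.0 * e[0] * e[0] - 2.0 * e[0] * e[1] + 2.0 * e[1] * e[1]) / 3.0;

    // Whitened residuals r_i = (v_i · e) / σ_i with σ_i = √D_i:
    let r1 = (v1[0] * e[0] + v1[1] * e[1]) / d[0].sqrt();
    let r2 = (v2[0] * e[0] + v2[1] * e[1]) / d[1].sqrt();
    (s_quad, r1 * r1 + r2 * r2)
}

fn main() {
    let (s_quad, s_white) = whitening_check();
    assert!((s_quad - s_white).abs() < 1e-12); // both forms agree: 0.5
    println!("quadratic form = {s_quad}, sum of whitened squares = {s_white}");
}
```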
The outlier problem
Each $r_i^2$ grows as the square of the measurement error. A single bad association at $10\sigma$ already contributes $100$ to the sum; at $30\sigma$ it contributes $900$. A handful of bad measurements drown out the signal from hundreds of good ones and pull the optimum off.
The usual robust-loss fixes -- L1 ($|r|$) and Huber (quadratic near zero, linear past a threshold) -- replace $r^2$ with something that grows slower than quadratically, which limits but does not cap each residual's contribution; a single very bad outlier can still pull the solution. Their derivatives are also awkward: L1 has a kink at $r = 0$ (undefined derivative there), Huber has a kink at the quadratic-to-linear transition (continuous but not smooth), and Gauss-Newton wants a smooth Jacobian. We want a loss that is both fully bounded and smooth everywhere.
The cap
We look for a function $\alpha(x)$ that behaves like $x$ in the normal range but saturates for large inputs, so that $\alpha(x)^2$ contributes a bounded amount $\Delta S_{\max}$ to the sum instead of an unbounded $x^2$.
A clean choice is
$$ \alpha(x) = \gamma \arctan\frac{x}{\gamma}, \qquad \gamma = \frac{2 \sqrt{\Delta S_{\max}}}{\pi}. $$
The $\gamma$ value follows from the saturation requirement: as $|x| \to \infty$, $\arctan(x/\gamma) \to \pm \pi/2$, so $\alpha(x)^2 \to (\gamma \pi / 2)^2$; setting that equal to $\Delta S_{\max}$ and solving gives the $\gamma$ above. Three further properties fall out:
- $\alpha(x) \approx x$ for $|x| \sim 1$ -- small residuals pass through unchanged.
- $\alpha'(0) = 1$, so near the optimum the loss is indistinguishable from plain $r^2$.
- $\alpha(x)^2 \to \Delta S_{\max}$ as $|x| \to \infty$ -- no single residual can push the sum by more than $\Delta S_{\max}$.

Left: $\alpha(x)$ (green) bends away from the identity $x$ (purple) once $|x|$ moves past a few sigmas. Right: the squared contribution -- the unbounded $x^2$ parabola vs the saturating $\alpha(x)^2$, capped at $\Delta S_{\max}$. The cap is also smooth; Gauss-Newton still sees a well-defined Jacobian everywhere.
Using it
Replace each $r_i$ in the sum with $\alpha(r_i)$:
$$ \hat{S}(M) = \sum_i \alpha(r_i)^2 = \sum_i \left[ \gamma \arctan\frac{L_i^D - G_i(M)}{\gamma \sigma_i} \right]^2. $$
In practice $\Delta S_{\max}$ in the range $[10, 25]$ (so $\gamma$ between roughly $2$ and $3$) suppresses genuine outliers hard without biasing inlier-dominated regions. Since residuals are already sigma-scaled, this corresponds roughly to saying "residuals past $3$ to $5\sigma$ stop mattering".
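These numbers are easy to reproduce with std only -- no arael API involved; `gamma`, `alpha`, and `alpha_prime` below are local helpers mirroring the formulas above:

```rust
/// γ from the saturation cap: γ = 2·√ΔS_max / π.
fn gamma(delta_s_max: f64) -> f64 {
    2.0 * delta_s_max.sqrt() / std::f64::consts::PI
}

/// Starship robustifier α(x) = γ·atan(x/γ).
fn alpha(x: f64, g: f64) -> f64 {
    g * (x / g).atan()
}

/// Gradient falloff α'(x) = 1 / (1 + (x/γ)²).
fn alpha_prime(x: f64, g: f64) -> f64 {
    1.0 / (1.0 + (x / g).powi(2))
}

fn main() {
    let g = gamma(25.0); // ΔS_max = 25  →  γ ≈ 3.18
    println!("gamma        = {g:.4}");
    println!("alpha(1)     = {:.4}", alpha(1.0, g));         // ≈ 0.97: inliers pass through
    println!("alpha'(5)    = {:.4}", alpha_prime(5.0, g));   // ≈ 0.29: 29% pull left at 5σ
    println!("alpha(1e6)^2 = {:.4}", alpha(1e6, g).powi(2)); // saturates at ΔS_max = 25
}
```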
In arael this is exactly what you see in the demo constraint bodies:
```rust
// The subtraction operands were elided in the original; the shape is:
let plain_r = (/* measurement - prediction */) / sigma; // whitened residual, in sigmas
gamma * atan(plain_r / gamma)                           // Starship-robustified residual
```
The symbolic-differentiation pipeline handles atan's derivative automatically; from the macro's point of view the residual is just another expression. No special-case code, no outlier bookkeeping.
Initialisation matters
Gauss-Newton (and Levenberg-Marquardt) is a local method: each step linearises the cost around the current $M$ and moves in the direction that linearisation suggests. For any loss, you need a starting $M_0$ close enough to the optimum that the linearisation is informative.
Starship makes this requirement stricter. The gradient falls off as $\alpha'(r) = 1 / (1 + \pi^2 r^2 / (4 \Delta S_{\max}))$, so at the recommended $\Delta S_{\max} = 25$ a residual at $5\sigma$ still carries about 29% of its least-squares pull and a $10\sigma$ residual about 9% -- still usable. Once you get out to $20\sigma$ and beyond it drops under 3% and those residuals are effectively frozen. If $M_0$ puts many residuals that far out, the solver has nothing to work with and stalls. The usual remedy is graduated optimisation: start with a large $\Delta S_{\max}$ (loose cap, everything in the informative regime), solve, then shrink it across passes down to the target value. The SLAM demo does this via a `frine_isigma_scale` field stepped per pass.
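A minimal sketch of such a graduated schedule, assuming a geometric shrink of $\Delta S_{\max}$ between passes. `schedule` is a hypothetical helper for illustration, not an arael API, and the per-pass solve call is elided:

```rust
/// Geometric interpolation from a loose cap down to the target across
/// `passes` solves (passes >= 2), keeping the per-pass shrink factor constant.
fn schedule(start: f64, target: f64, passes: usize) -> Vec<f64> {
    assert!(passes >= 2);
    let ratio = (target / start).powf(1.0 / (passes as f64 - 1.0));
    (0..passes).map(|k| start * ratio.powi(k as i32)).collect()
}

fn main() {
    // ΔS_max: 2500 → ~790.6 → 250 → ~79.1 → 25; γ shrinks with √ΔS_max.
    for cap in schedule(2500.0, 25.0, 5) {
        let gamma = 2.0 * cap.sqrt() / std::f64::consts::PI;
        println!("ΔS_max = {cap:8.1}  γ = {gamma:.2}");
        // per pass: scale the model's robustifier (cf. frine_isigma_scale),
        // then run the LM solve and carry the result into the next pass
    }
}
```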
Localization Demo
Same model as SLAM but landmarks are fixed (known map). Since landmark positions are not optimized, there is no gauge freedom and absolute pose errors are meaningful. No GPS needed -- the known landmarks anchor the solution.
The frine constraint uses a remote block (pose.hb_pose) -- the hessian block lives on Pose, not on PointFrine, since only Pose has parameters. With only pose parameters, the hessian is block-tridiagonal (kd=11 for 6-param poses), so the band solver can be used for O(n) scaling instead of O(n^3) dense -- 9.4x faster at 500 poses.
See examples/loc_demo.rs.
Examples
The examples/ directory is the primary place to see the API in use. Each file is runnable via `cargo run --release --example <name>`.
- bench_band -- benchmarks the band Cholesky backend against dense on the localisation model at increasing pose counts. Prints timing + speedup.
- bench_investigate -- deeper comparison of sparse backends (faer, schur) on SLAM, with assembly vs solve breakdown and numeric cross-check of the solutions.
- bench_sparse -- sparse Cholesky backends (faer / schur) vs dense on SLAM.
- calc_demo -- `bc`-style REPL calculator built on `arael-sym`. Shows `parse_with_functions` + `FunctionBag` for user-defined functions, persistent history via rustyline.
- jacobian_demo -- `#[arael(root, jacobian)]`, `#[arael(constraint_index)]`, and `calc_jacobian` / `calc_cost_table` walk-through. End-to-end reference for the instrumentation features used in convergence debugging.
- linear_demo -- robust linear regression on noisy 2D data. Residual wrapped in `gamma * atan(r / gamma)` -- the Starship method (US12346118), same robustifier used by the feature constraints in loc/SLAM. Minimal single-struct model + LM fit, compared against plain closed-form least squares.
- loc_demo -- localisation with fixed known landmarks (no gauge freedom). Block-tridiagonal Hessian + band solver. Graduated-isigma optimisation via a root `frine_isigma_scale` field.
- loc_global_demo -- how to put `Param` fields on the root struct and have constraints consume them. Uses a system-global rigid transform (translation + 3-axis rotation applied to every pose) as the running example; every residual that reads the robot's world pose composes the globals before evaluating. Shows the two wiring shapes for pose<->root cross-Hessian pairs (`CrossBlock<Pose, Path>` on the constraint struct, and a root-owned `TripletBlock` named via the `root.<field>` block spec) and a `Path::optimise_center` pass that freezes pose params and optimises only the globals before the main sweep.
- model_demo -- minimal `#[arael::model]` walk-through showing how `Param`, `SimpleEulerAngleParam`, and the update cycle fit together.
- refs_demo -- `Ref<T>`, `refs::Vec`, `refs::Deque`, and `refs::Arena` behaviour: insertion, iteration, stable handles.
- runtime_fit_demo -- curve fitting where the residual equation is a string parsed at runtime. Demonstrates `ExtendedModel` + robust loss on top of the symbolic front end.
- single_root_demo -- single-struct model-and-root + a direct-composed sub-model, each carrying its own `SelfBlock<Self>`. The smallest example that exercises the "root has its own params" path.
- slam_demo -- synthetic visual-inertial SLAM: S-curve trajectory, 20 poses, 40 landmarks, odometry + tilt + GPS + feature observations. Full verbose-LM trace across graduated isigma passes -- the reference for what a healthy solver run looks like.
- sym_demo -- symbolic-math tour: expression building, automatic differentiation, CSE, pretty printing, parsing. No solver involvement; pure `arael-sym`.
- user_function_demo -- `#[arael::function]` for user-defined operators in constraint bodies. Form A purely symbolic `sigmoid(x) = 1 / (1 + exp(-x))` and Form B opaque numerical `my_safe_asin` with a closed-form symbolic derivative, both used in a single two-residual LM fit.
Solvers
Levenberg-Marquardt with pluggable linear-algebra backends. Full reference: docs/SOLVERS.md.
Default to `solve_sparse_faer_f32` (or `solve_sparse_faer` for f64). For most real problems the Hessian is sparse enough that sparse Cholesky is the right choice; faer is pure Rust, has no external dependency, and handles the full sparsity pattern of a SLAM-like problem.
| Backend | When |
|---|---|
| `solve_sparse_faer[_f32]` | default. Any non-trivial problem |
| `solve[_f32]` (dense) | toy problems (≤ 4 parameters) |
| `solve_band[_f32]` | only when the Hessian is genuinely block-tridiagonal with a known half-bandwidth `kd` |
`LmConfig` controls the solve -- convergence tolerances, iteration caps, initial lambda, and `verbose` (turn it on first when debugging). Defaults are a safe middle ground; production solves usually want `max_iters` and `rel_precision` tuned for the performance/quality trade-off that actually matters for the problem. See docs/SOLVERS.md for the full field reference and a recipe for picking them.
Runtime Differentiation
Compile-time differentiation generates optimized Rust code with CSE at build time -- ideal when the model structure is fixed. But many applications need equations that are only known at runtime: user-typed formulas in a CAD parametric dimension, configuration-driven curve fitting, or symbolic constraints loaded from a file.
Arael supports this through runtime differentiation: parse an equation string with arael_sym::parse, symbolically differentiate once at setup with E::diff, then evaluate the expression tree numerically each solver iteration. The ExtendedModel trait and TripletBlock provide the integration point with the LM solver.
The sketch editor (arael-sketch) uses this extensively for parametric expression dimensions -- a user can type d0 * 2 + 3 as a dimension value, and the solver constrains the geometry to satisfy the equation in real time, with full symbolic derivatives.
```rust
// Parse the equation at runtime, differentiate symbolically.
// (Call arguments were elided in the original; comments mark the gaps.)
let expr = parse(/* equation string */).unwrap();
let residual = expr - symbol(/* observed value */);
let dr_da = residual.diff(/* a */); // symbolic derivative w.r.t. a
let dr_db = residual.diff(/* b */); // symbolic derivative w.r.t. b

// In ExtendedModel::extended_compute64(params, grad) -- each solver iteration:
for /* entry */ in &data {
    // evaluate residual and the two derivatives numerically, accumulate
}
```
The demo accepts an arbitrary equation from the command line:
Full source: examples/runtime_fit_demo.rs.
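The differentiate-once-at-setup, evaluate-per-iteration split can be illustrated with a toy expression tree. This stands in for arael-sym's much richer `E` type -- none of the names below are arael APIs:

```rust
// Toy runtime differentiation: a tiny expression tree, symbolic diff once
// at setup, numeric evaluation each "solver iteration".
#[derive(Clone)]
enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}
use Expr::*;

fn diff(e: &Expr, v: &str) -> Expr {
    match e {
        Const(_) => Const(0.0),
        Var(name) => Const(if *name == v { 1.0 } else { 0.0 }),
        Add(a, b) => Add(Box::new(diff(a, v)), Box::new(diff(b, v))),
        // product rule: (ab)' = a'b + ab'
        Mul(a, b) => Add(
            Box::new(Mul(Box::new(diff(a, v)), b.clone())),
            Box::new(Mul(a.clone(), Box::new(diff(b, v)))),
        ),
    }
}

fn eval(e: &Expr, env: &dyn Fn(&str) -> f64) -> f64 {
    match e {
        Const(c) => *c,
        Var(name) => env(name),
        Add(a, b) => eval(a, env) + eval(b, env),
        Mul(a, b) => eval(a, env) * eval(b, env),
    }
}

fn main() {
    // residual = a * x + b -- an "equation known only at runtime"
    let r = Add(
        Box::new(Mul(Box::new(Var("a")), Box::new(Var("x")))),
        Box::new(Var("b")),
    );
    let dr_da = diff(&r, "a"); // differentiate once at setup
    let env = |n: &str| match n { "a" => 2.0, "b" => 1.0, "x" => 3.0, _ => 0.0 };
    assert_eq!(eval(&r, &env), 7.0);     // 2*3 + 1
    assert_eq!(eval(&dr_da, &env), 3.0); // ∂r/∂a = x
    println!("r = {}, dr/da = {}", eval(&r, &env), eval(&dr_da, &env));
}
```

The real pipeline does the same thing with parsing, simplification, and `TripletBlock` accumulation layered on top.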
Instrumentation and troubleshooting
My solve doesn't converge. What do I check?
- Turn on solver verbose mode first. Set `verbose: true` on `LmConfig` and every LM step prints cost, lambda, and the step outcome. On a Cholesky rejection the line also reports non-finite counts for grad / diagonal / cur_x / matrix and a count of non-positive diagonal entries -- four quick signals that narrow the problem before any deeper digging (`let cfg = …; let result = solve_sparse_faer_f32(…);`). A healthy pass looks like steady cost drops with rising / stabilising step sizes and no Cholesky rejections -- see examples/slam_demo.rs for a reference trace. If verbose already reports NaN / Inf or `diag<=0`, skip straight to the NaN / Inf and non-positive-diagonal items below; otherwise continue to the cost-by-label breakdown.
- Cost breakdown by label. Name your constraint attributes with `#[arael(constraint(hb, name = "drift", { ... }))]` so each group shows up under its own label in the sum-of-squares. Call `model.calc_cost_table(&params)` for a `HashMap<&'static str, T>` and log it. A single label dominating the total is usually the culprit -- either an overly tight sigma, bad initial values for its inputs, or a constraint that's mathematically unsatisfiable.
- NaN or Inf residuals / derivatives. The verbose-mode output from the first step already tells you whether grad / matrix / params contain non-finite values at the failing step. If they do, walk `model.calc_jacobian(&params).rows` to find the specific row. A NaN residual or partial derivative usually means a `sqrt`, `acos`, `asin`, or `atan2` saw a degenerate input (zero-length vector, both-zero arguments, `|x| > 1` for asin/acos). `arael-sym` ships `safe_sqrt`, `safe_asin`, `safe_acos`, `safe_atan2` that clamp / regularise at the singular point and produce non-diverging derivatives. Before reaching for them, prefer to redesign the constraint so the singularity can't be hit. A `safe_*` wrapper hides the degeneracy from the solver and may leave the residual insensitive to the parameters that should drive it out of the singular region; an equivalent constraint formulated on the right geometric quantity avoids the singularity entirely. E.g. match 3D landmarks to features in 3D space (compare world-frame directions or positions) instead of projecting through a camera model and computing 2D image-plane residuals -- the 3D formulation is simpler, better conditioned, and has no pixel-wraparound / behind-camera pathology.
- Non-positive diagonal. The verbose-mode `diag<=0: N` counter at a Cholesky rejection is the loudest possible signal that some parameter is untouched by every constraint (indices left at `u32::MAX`) or is receiving a negative contribution. Either outcome is a bug distinct from f32 accumulation noise.
- Gradient magnitude. After `calc_grad_hessian_dense`, the maximum absolute gradient component should be small relative to the cost scale at a local minimum. A huge gradient with tiny cost means the parameter scaling is off -- one parameter moves cost several orders of magnitude more than another, which destabilises Levenberg-Marquardt.
- Hessian health. The same `hessian` array should be finite and positive-semi-definite at a minimum (smallest eigenvalue ≥ 0 modulo roundoff). A significantly negative smallest eigenvalue means the Gauss-Newton approximation `J^T J` is a poor local fit -- often because constraints are ill-conditioned or cancel.
- Stiffness. Ratios between the smallest and largest sigmas (or between the smallest and largest eigenvalues of `J^T J`) that span many orders of magnitude make the problem numerically stiff. LM damping has to pick a lambda that suits both ends, which is hard at f32 precision. Keep isigmas comparable where you can; if a tight constraint dominates one direction, a gauge direction orthogonal to it will starve for signal. Starting with a loose scale and ramping up (graduated optimisation -- see `loc_demo` / `slam_demo` for the `frine_isigma_scale` pattern) helps LM climb a stiff problem without rejecting early steps.
- Simpler math beats clever math. Reformulate residuals on the most natural geometric quantity. 3D direction / position errors are cheaper and better-conditioned than 2D reprojection errors; relative rotations compared as matrices or unit quaternions avoid Euler-angle gimbal lock; distances compared in squared form avoid `sqrt` derivatives near zero. Every nonlinear operation you remove is one less place for the residual / derivative to misbehave and one less source of stiffness.
- Inspect the generated code. Use `cargo expand` to see what the macro emitted for your constraint body -- see "Looking under the hood" below.
- Rank / DOF. Call `Jacobian::singular_values` (or the full `Jacobian::svd` for directions). Near-zero singular values count the degrees of freedom. If this is higher than you expect, the model is under-constrained. The right singular vectors (columns of `SvdResult::v`) corresponding to σ ≈ 0 name the unconstrained parameter directions -- useful for identifying which parameters are free. SVD is always performed in f64 regardless of the model's element type, so rank detection stays reliable even for f32 models.
Looking under the hood with cargo expand
Mastering arael means being able to read what the macros actually generated for your equations. `#[arael::model]` does a lot: it interprets the constraint body symbolically, differentiates it against every reachable parameter, runs common-subexpression elimination, and emits Rust code for three call paths (`__compute_blocks`, `__set_block_indices`, `calc_jacobian`). `cargo expand` (`cargo install cargo-expand`) prints the expansion exactly as the compiler sees it.
# or, for your own crate:
Example: a one-line fix constraint
The single-root demo declares
cargo expand --example single_root_demo shows the macro emits a __compute_blocks method with a block like:
```rust
/// arael: SingleRoot[fix_x] @ examples/single_root_demo.rs:28
let __r_0 = singleroot.isigma * /* (x - target), elided in the original */;
let __dr_0_0 = singleroot.isigma; // d/d x
let __dr_0_1 = 0.0;               // d/d y
__item.hb.add_residual(/* r, dr, grad -- arguments elided */);
```
Things to notice:
- `singleroot.x.work()` -- each param access is rewritten to `work()` so the LM trial step is used in place of the stored value without mutating it.
- Derivatives for every param the constraint touches appear individually (`__dr_0_0`, `__dr_0_1`). The `0.0` entry for `y` is not elided because the index into `hb` is positional; dead rows fold out at optimisation time.
- The residual and the partials flow into the entity's Hessian block via `hb.add_residual(r, dr, grad)` -- one call per residual, accumulating `2*r*dr` into `grad` and `2*dr_i*dr_j` into the block's packed upper triangle.
- The `/// arael: ...` doc comment is a source marker pointing at the constraint attribute the block came from -- invaluable when the expansion runs to thousands of lines.
Example: shared subexpressions
In a larger body -- say a landmark observation that builds a rotation matrix and reuses it across x/y/z residuals -- the macro runs CSE before emitting code, so you see lines like
```rust
// Operands were elided in the original; the shape is:
let __cse_0 = cos(/* … */);
let __cse_1 = sin(/* … */);
let __cse_2 = __cse_0 * /* … */
            + __cse_1 * /* … */;
// __cse_2 reused in __r_0, __r_1, and every __dr_* that needs it
```
Reading these tells you what the compiler actually has to evaluate -- useful for understanding the cost of a constraint, spotting accidental non-shared work, and sanity-checking that symbolic simplification collapsed things you expected it to.
What to look for
- `__set_block_indices` -- where each `SelfBlock` / `CrossBlock` / `TripletBlock` gets its global parameter indices written into place. A block that isn't touched here is invisible to the solver (its `u32::MAX` sentinel causes every `add_residual` to silently skip) -- a common failure mode.
- `__compute_blocks` -- the grad + block-Hessian accumulation path. Each constraint is a nested block with its own CSE'd body.
- `calc_jacobian` -- same body structure but builds a `JacobianRow` per residual instead of accumulating into the blocks. Generated only when you declare `#[arael(root, jacobian)]`.
- Source markers -- doc comments like `/// arael: PointFrine[<name>] @ path/to/file.rs:NNN` pinpoint the constraint attribute each block came from.
Expansion grows quickly (the single-root demo is ~800 lines; a full SLAM model is several thousand). Use `sed -n` or a pager scoped to the method you care about.
2D Sketch Editor
An interactive constraint-based 2D sketch editor built on the arael optimization framework. Draw geometry, apply constraints, and the solver keeps everything consistent in real time.
The sketch solver combines both differentiation modes:
- Geometric constraints (horizontal, coincident, parallel, tangent, etc.) use compile-time differentiation -- the macro generates optimized Gauss-Newton code with CSE for each constraint type.
- Parametric dimensions use runtime differentiation -- the user types an expression like `d0 * 2 + 3` as a dimension value, and the solver parses it, differentiates symbolically, and constrains the geometry to satisfy the equation in real time. Dimensions can reference each other, entity properties (`L0.length`, `A0.radius`), and arithmetic expressions. Broken references (deleted entities) are detected and the dimension falls back to its last computed value.
This makes the sketch editor a fully parametric constraint solver where changing one dimension propagates through all dependent expressions.
Running (native)
Running (browser)
The sketch editor compiles to WebAssembly and runs in the browser.
Requires trunk (cargo install trunk) and the
wasm32-unknown-unknown target (rustup target add wasm32-unknown-unknown):
# Open http://localhost:8080
Tools
- Line (L), Circle (O), Arc (A), Point (P) -- draw geometry with auto-snap to nearby points, endpoints, and curves
- Dimension (D) -- add length, distance, radius, angle, and point-to-line distance dimensions with draggable annotations. Supports numeric values and parametric expressions (`d0 * 2`, `L0.length + 3`).
- Select (S) -- click to select, drag to move entities, Backspace/Delete to remove
- Dark/Light mode toggle, Save/Load (JSON), Undo/Redo (Ctrl+Z/Ctrl+Shift+Z)
Constraints
Horizontal (H), Vertical (V), Coincident (C), Parallel, Perpendicular, Equal length/radius, Tangent (T), Collinear, Midpoint (M), Symmetry (lines or points about a mirror line), Lock (K), Line style (X). Constraints are visualized as symbols on the geometry and can be selected and deleted.
Example: Sketch Solver API
```rust
// Import paths, argument lists, and collection indices were elided in the
// original. The reconstruction below keeps the surviving structure; the type
// name, handle-based indexing, and the corner pairings are inferred from the
// final geometry and marked where guessed.
// use …::CrossBlock;
// use …::vect2d;
// use …::*;

let mut sketch = Sketch::new(); // type name assumed

// Create a rectangle from 4 lines
let bottom = sketch.add_line(/* (0,0) -> (4,0), elided */);
let right  = sketch.add_line(/* … */);
let top    = sketch.add_line(/* … */);
let left   = sketch.add_line(/* … */);

// Horizontal/vertical constraints
sketch.lines[bottom].constraints.horizontal = true;
sketch.lines[top].constraints.horizontal = true;
sketch.lines[right].constraints.vertical = true;
sketch.lines[left].constraints.vertical = true;

// Connect corners (a.p2 == b.p1)
sketch.coincident_ll21.push(/* (bottom, right), elided */);
sketch.coincident_ll21.push(/* (right, top), elided */);
sketch.coincident_ll21.push(/* (top, left), elided */);
sketch.coincident_ll21.push(/* (left, bottom), elided */);

// Fix bottom-left corner and set dimensions
sketch.lines[bottom].p1 = /* fixed point at (0,0), elided */;
sketch.lines[bottom].constraints.has_length = true;
sketch.lines[bottom].constraints.length = 4.0;
sketch.lines[right].constraints.has_length = true;
sketch.lines[right].constraints.length = 2.0;

// Solve -- all constraints satisfied simultaneously
sketch.solve();
// bottom: (0,0)->(4,0), right: (4,0)->(4,2), top: (4,2)->(0,2), left: (0,2)->(0,0)
```
The sketch solver uses Levenberg-Marquardt optimization with drift regularization and robust drag constraints. Geometric constraints are differentiated at compile time; parametric expression dimensions use runtime differentiation via ExtendedModel.
Command Panel & Scripting
Press / to open the command panel. Full scripting support with 40+ commands for geometry creation, constraints, dimensions, parameters, introspection, and view control. Commands support expressions, coordinate references (L0.p2, @dx,dy), geometric functions (midpoint(L0), intersect(L0,L1)), and vector arithmetic (L0.p2 + normal(L0) * 3).
See arael-sketch/docs/COMMANDS.md for the full command reference.
AI Agent Integration (MCP)
The sketch editor embeds an MCP (Model Context Protocol) server, enabling AI agents like Claude Code to create and modify sketches programmatically. The AI sends sketch commands and reads state through the standard MCP tool interface.

Dark mode with parameters panel, command history showing MCP agent connection, and geometry drawn by Claude Code.
Start the editor with MCP enabled:
The --mcp-allow-all flag auto-approves OAuth connections from AI agents (recommended for local use). Without it, connections require manual approval in the GUI (not yet implemented).
Configure Claude Code (~/.claude.json):
The MCP server exposes tools for executing sketch commands (execute_command, execute_script), querying state (get_sketch_state), and reading documentation (get_help). The initialize response includes a condensed command reference that the AI loads into context automatically. File operations (save, load) are blocked for security.
See arael-sketch/ for the full implementation.
Project Structure
arael/ Main library
src/
model.rs Param<T>, Model trait, SelfBlock, CrossBlock, TripletBlock
simple_lm.rs LM solver, LmSolver trait, Dense/Band/Sparse backends, CooMatrix, CscMatrix
refs.rs Type-safe Vec<T>, Deque<T>, Arena<T>, Ref<T>
vect.rs vect2<T>, vect3<T>
matrix.rs matrix2<T>, matrix3<T>
quatern.rs quatern<T>
cpp/
eigen_sparse.cpp Eigen SimplicialLLT + CHOLMOD FFI bridge (optional)
arael-sym/ Symbolic math library
src/
lib.rs E type, constructors, operators
diff.rs Symbolic differentiation
simplify.rs Algebraic simplification
cse.rs Common subexpression elimination
eval.rs Evaluation, substitution, free variables
fmt.rs Display, LaTeX, Rust code generation
geo.rs Symbolic vectors/matrices (vect3sym, matrix3sym)
linalg.rs SymVec, SymMat, Jacobian
parse.rs Expression parser
arael-macros/ Procedural macros
src/
lib.rs #[arael::model], sym!, field rewriting
constraint.rs Constraint code generation, CSE integration
arael-sketch-solver/ 2D constraint solver library
src/
lib.rs Sketch root, solve(), entity management
entities.rs Point, Line, Arc types
constraints.rs 40+ cross-constraint types
dimensions.rs Dimension annotations
arael-sketch/ Interactive sketch editor application
src/
main.rs Entry points, EditorApp, core logic
actions.rs Action enum, undo-able operations
history.rs Undo/redo system
tools.rs Tool modes, selection, constraint types
drawing.rs Canvas rendering, grid, dimensions
colors.rs Color scheme (light/dark)
geometry.rs Coordinate transforms, snapping
License
See LICENSE.md.
