
# NORU

**NNUE On RUst** — zero-dependency NNUE training & inference library in pure Rust.

## What is NNUE?
NNUE (Efficiently Updatable Neural Network) is a neural network architecture designed for fast evaluation in game engines. Originally developed for Shogi and adopted by Stockfish, NNUE enables real-time neural network inference through incremental accumulator updates.
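NNUE's incremental trick can be sketched with a toy accumulator (names and shapes here are illustrative, not NORU's API): when one feature flips, you add or subtract a single weight column instead of recomputing the full first-layer product.

```rust
/// Recompute the accumulator from scratch: sum the weight columns
/// of every active feature (a full refresh).
fn refresh(acc: &mut [i32], columns: &[Vec<i32>], active: &[usize]) {
    acc.fill(0);
    for &f in active {
        for (a, w) in acc.iter_mut().zip(&columns[f]) {
            *a += w;
        }
    }
}

/// Incremental update: when one feature turns on, add just its column.
/// Cost is O(accumulator_size) per move instead of
/// O(active_features × accumulator_size) for a full refresh.
fn add_feature(acc: &mut [i32], columns: &[Vec<i32>], f: usize) {
    for (a, w) in acc.iter_mut().zip(&columns[f]) {
        *a += w;
    }
}
```

The incremental path produces bit-identical results to a full refresh, which is what makes it safe to use inside a search tree.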
## What is NORU?
NORU is a game-agnostic NNUE library that provides both training and inference in a single, dependency-free Rust crate. Configure your network dimensions at runtime via `NnueConfig` — no recompilation needed.
## Key Features

- Multi-hidden-layer — arbitrary-depth networks (e.g. `&[256, 32, 32]`)
- CReLU + SCReLU — Squared Clipped ReLU for stronger accumulator activation
- SIMD-accelerated inference — AVX2 (x86_64), NEON (aarch64), with scalar fallback
- Training + inference — FP32 backpropagation with Adam optimizer, i16 quantized inference
- Zero dependencies — pure Rust, no PyTorch, no CUDA, no C bindings
- Game-agnostic — runtime-configurable network dimensions via `NnueConfig`
- Incremental updates — efficient accumulator add/remove for search trees
- Quantization — automatic FP32 → i16 conversion for deployment
- Binary format v2 — versioned model serialization with auto-detection
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
noru = "1.0"
```
### Training

```rust
use noru::config::NnueConfig;
use noru::trainer::{AdamState, Gradients, SimpleRng, TrainableWeights, TrainingSample};

// Note: struct fields and argument lists below are illustrative;
// see the API reference for the exact signatures.

// 1. Define your network dimensions
let config = NnueConfig { /* feature_size, accumulator_size, hidden_sizes, activation, … */ };

// 2. Initialize weights
let mut rng = SimpleRng::new(42);
let mut weights = TrainableWeights::init_random(&config, &mut rng);
let mut adam = AdamState::new(&config);

// 3. Train on samples
let sample = TrainingSample { /* active feature indices + target */ };
let fwd = weights.forward(&sample);
let mut grad = Gradients::new(&config);
weights.backward(&fwd, &sample, &mut grad); // BCE loss
weights.adam_update(&mut adam, &grad);

// 4. Quantize for deployment
let inference_weights = weights.quantize(); // FP32 → i16
```
### Inference

```rust
use noru::config::NnueConfig;
use noru::network::{forward, Accumulator, FeatureDelta, NnueWeights};

// Note: argument lists below are illustrative; see the API reference
// for the exact signatures.

// Load quantized weights (v2 format auto-detected)
let weights = NnueWeights::load_from_bytes(&data, None)?;

// Or with legacy format (requires config)
let weights = NnueWeights::load_from_bytes(&data, Some(&config))?;

// Evaluate a position
let mut acc = Accumulator::new(&weights);
acc.refresh(&weights, &stm_features, &nstm_features);
let eval: i32 = forward(&weights, &acc);

// Incremental update (for search trees)
let mut delta_stm = FeatureDelta::new();
delta_stm.add(feature_idx);
delta_stm.remove(old_feature_idx);
acc.update_incremental(&weights, &delta_stm);
```
### Save / Load Models

```rust
use std::fs;

// Save (path is illustrative)
let bytes = weights.save_to_bytes(); // v2 format with NORU header
fs::write("model.nnue", &bytes)?;

// Load (auto-detects v2 header)
let data = fs::read("model.nnue")?;
let weights = NnueWeights::load_from_bytes(&data, None)?;
```
## Architecture

```
Input (sparse features)
        ↓
Feature Transform: [feature_size] → [accumulator_size]  (per perspective)
        ↓
CReLU or SCReLU
        ↓
Concat: [accumulator_size × 2]  (STM + NSTM perspectives)
        ↓
Hidden Layer₁ → CReLU → Hidden Layer₂ → … → Hidden Layerₙ → CReLU
        ↓
Output Layer → 1 (evaluation score)
```
All dimensions are configured at runtime:

```rust
use noru::config::{Activation, NnueConfig};

// Note: field names and values are illustrative; see `NnueConfig`
// for the exact set.

// Simple (single hidden layer)
let config = NnueConfig {
    feature_size: 768,
    accumulator_size: 128,
    hidden_sizes: &[32],
    activation: Activation::CReLU,
};

// Stockfish-style (multi-layer + SCReLU)
let config = NnueConfig {
    feature_size: 768,
    accumulator_size: 256,
    hidden_sizes: &[32, 32],
    activation: Activation::SCReLU,
};
```
## SIMD Acceleration

Inference is automatically accelerated on supported platforms:
| Platform | Instruction Set | Width | Auto-detected |
|---|---|---|---|
| x86_64 | AVX2 | 256-bit (16 × i16) | Runtime |
| aarch64 | NEON | 128-bit (8 × i16) | Compile-time |
| Other | Scalar | — | Fallback |
No configuration needed — the fastest available path is selected automatically.
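As a reference for what the accelerated kernels compute, here is a scalar sketch of saturating i16 vector addition; the AVX2 (`_mm256_adds_epi16`) and NEON (`vqaddq_s16`) paths compute the same thing, 16 or 8 lanes at a time. The function name and body are illustrative, not NORU's actual code.

```rust
// Scalar reference for saturating i16 vector addition: each lane clamps
// at i16::MIN / i16::MAX instead of wrapping, which keeps quantized
// accumulators from overflowing during long add/remove sequences.
fn vec_add_i16_scalar(dst: &mut [i16], src: &[i16]) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d = d.saturating_add(*s);
    }
}
```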
## API Reference

### `noru::config`

| Type | Description |
|---|---|
| `NnueConfig` | Network dimensions and activation type (static `hidden_sizes`) |
| `OwnedNnueConfig` | Runtime-constructible variant with `Vec<usize>` hidden sizes; convert via `.leak()` |
| `Activation` | Activation function enum (`CReLU`, `SCReLU`) |
### `noru::network` (Inference, i16)

| Type / Function | Description |
|---|---|
| `NnueWeights` | Quantized i16 weights for inference |
| `NnueWeights::load_from_bytes()` | Load weights from binary (v2 auto-detect) |
| `NnueWeights::save_to_bytes()` | Save weights to v2 binary format |
| `Accumulator` | Maintains per-perspective activation sums |
| `Accumulator::refresh()` | Full recomputation from feature list |
| `Accumulator::update_incremental()` | Efficient add/remove update |
| `Accumulator::swap()` | Swap STM/NSTM perspectives |
| `FeatureDelta` | Tracks added/removed features for incremental updates |
| `forward()` | Full forward pass: Accumulator → hidden layers → output |
### `noru::trainer` (Training, FP32)

| Type / Function | Description |
|---|---|
| `TrainableWeights` | FP32 weights with training methods |
| `TrainableWeights::init_random()` | Kaiming initialization |
| `TrainableWeights::forward()` | FP32 forward pass with intermediate results |
| `TrainableWeights::backward()` | Backpropagation (BCE loss) |
| `TrainableWeights::backward_mse()` | Backpropagation (MSE loss) |
| `TrainableWeights::adam_update()` | Adam optimizer step |
| `TrainableWeights::quantize()` | FP32 → i16 for deployment |
| `AdamState` | Adam optimizer momentum/velocity state |
| `Gradients` | Gradient accumulation buffer |
| `TrainingSample` | Training data (features + target) |
| `SimpleRng` | Built-in xorshift64 RNG (no external dependency) |
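For intuition about the built-in RNG, here is a sketch of a classic xorshift64 step (Marsaglia's 13/7/17 shift triple). The struct name and constants are the textbook ones, not necessarily NORU's exact `SimpleRng` implementation.

```rust
// Minimal xorshift64 PRNG: three shift-xor steps per output.
// State must be nonzero, since zero is a fixed point of the recurrence.
struct Xorshift64 {
    state: u64,
}

impl Xorshift64 {
    fn new(seed: u64) -> Self {
        Self { state: seed.max(1) } // force a nonzero starting state
    }

    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}
```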
### `noru::simd`

| Function | Description |
|---|---|
| `vec_add_i16()` | Saturating i16 vector addition |
| `vec_sub_i16()` | Saturating i16 vector subtraction |
| `vec_clipped_relu()` | ClippedReLU activation (clamp to 0..127) |
| `dot_i16_i32()` | i16 dot product with i32 accumulation |
| `dot_screlu_i64()` | SCReLU squared dot product with i64 accumulation |
### `noru::quant`

| Constant / Function | Description |
|---|---|
| `WEIGHT_SCALE` (64) | FP32 → i16 quantization scale |
| `ACTIVATION_SCALE` (256) | Accumulator → hidden scale |
| `OUTPUT_SCALE` (16) | Final output scale |
| `clipped_relu()` | ClippedReLU activation |
| `screlu_f32()` | Squared ClippedReLU (f32) |
| `saturate_i16()` | Safe i32 → i16 conversion |
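To make the scales concrete, here is a sketch of quantizing a weight at `WEIGHT_SCALE` = 64 with i16 saturation, plus the two activations from the table. Function bodies are illustrative, and the [0, 1] clamp range in the f32 SCReLU is an assumption; NORU's actual helpers may differ in detail.

```rust
const WEIGHT_SCALE: f32 = 64.0; // matches the documented quantization scale

// FP32 weight → i16: scale, round, then saturate into the i16 range.
fn quantize_weight(w: f32) -> i16 {
    let scaled = (w * WEIGHT_SCALE).round() as i32;
    scaled.clamp(i16::MIN as i32, i16::MAX as i32) as i16
}

// ClippedReLU on quantized activations: clamp to 0..127.
fn clipped_relu_i16(x: i16) -> i16 {
    x.clamp(0, 127)
}

// Squared ClippedReLU in f32 (clamp range assumed to be [0, 1]).
fn screlu_f32(x: f32) -> f32 {
    let c = x.clamp(0.0, 1.0);
    c * c
}
```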
## Building

```sh
# Library
cargo build --release

# Run tests
cargo test

# Generate documentation
cargo doc --no-deps --open
```
## Design Decisions

- No GPU — Designed for real-time game AI on CPU. NNUE's strength is being fast enough for depth-4+ search on consumer hardware.
- No external dependencies — Even the RNG is built-in (xorshift64). This means `cargo add noru` just works, everywhere.
- SCReLU on first layer only — Following the Stockfish pattern, SCReLU is applied only to the accumulator output; subsequent hidden layers always use CReLU to avoid numerical issues in narrow layers.
- Output-major weight layout — Hidden layer weights are stored transposed (output-major) for contiguous SIMD memory access in dot products.
- `Vec<T>` over fixed arrays — All weights use heap-allocated vectors for runtime flexibility. Slight overhead versus compile-time arrays, but it enables one binary for any game.
- Sparse feature input — Features are passed as lists of active indices, not dense vectors. This matches NNUE's design for board games, where most features are inactive in any given position.
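The output-major layout decision can be sketched as follows (illustrative code, not NORU's internal kernel): storing each output neuron's weights as one contiguous row turns every dot product into a linear scan over memory, which SIMD loads handle well.

```rust
// Dense layer with output-major weights: weights[i * n_in .. (i + 1) * n_in]
// is the complete input-weight row of output neuron i, so each dot product
// reads contiguous memory instead of striding through a column.
fn dense_layer_i16(input: &[i16], weights: &[i16], biases: &[i32], out: &mut [i32]) {
    let n_in = input.len();
    for (i, o) in out.iter_mut().enumerate() {
        let row = &weights[i * n_in..(i + 1) * n_in];
        *o = biases[i]
            + row
                .iter()
                .zip(input)
                .map(|(&w, &x)| w as i32 * x as i32)
                .sum::<i32>();
    }
}
```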
## License
Licensed under either of
at your option.
## Related Projects
- Stockfish NNUE — The chess engine that popularized NNUE
- bullet — GPU-accelerated NNUE training (Rust + CUDA)
- Rapfi — Gomoku engine with advanced NNUE