# neuromod
**Reward-modulated spiking neural networks with biologically plausible learning.**
neuromod gives you LIF and Izhikevich neuron models, Poisson spike encoding,
spike-timing-dependent plasticity (STDP), and a full neuromodulator system
(dopamine, cortisol, acetylcholine) — all in a zero-unsafe, serde-ready Rust crate.
## Quick start
```toml
[dependencies]
neuromod = "0.1"
```

```rust
use neuromod::engine::SpikingInferenceEngine;
use neuromod::modulators::TelemetryFrame;
```
## Core types
| Type | Module | Description |
|---|---|---|
| `LifNeuron` | `neurons` | Leaky Integrate-and-Fire — the fast, reactive workhorse |
| `IzhikevichNeuron` | `neurons` | Two-variable biophysical model; supports bursting, chattering, etc. |
| `PoissonEncoder` | `neurons` | Converts a scalar intensity into a stochastic spike train |
| `apply_stdp` | `stdp` | Exponential STDP window, gated by a `dopamine_lr` reward signal |
| `synaptic_scaling` | `stdp` | L1-norm weight normalization (homeostatic plasticity) |
| `NeuroModulators` | `modulators` | Dopamine / cortisol / acetylcholine / tempo state |
| `TelemetryFrame` | `modulators` | Hardware snapshot that drives modulator levels each tick |
| `RewardEvent` | `modulators` | Discrete `WorkAccepted` / `BreakthroughFound` / `SourceSwitch` events |
| `SpikingInferenceEngine` | `engine` | Full 8-LIF + 5-Iz SNN with STDP and homeostatic adaptation |
| `FaultClass` | `diagnostics` | Hardware fault classification enum with canonical error codes |
| `FpgaMetrics` | `diagnostics` | WNS parser for Vivado timing summary reports |
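To make the `PoissonEncoder` idea concrete, here is a minimal standalone sketch of Poisson spike encoding — independent of the crate's actual API, with a tiny deterministic LCG so it needs no external crates:

```rust
/// Illustrative Poisson encoder (not the crate's `PoissonEncoder` API):
/// each timestep, emit a spike with probability `rate_hz * dt_s`.
fn poisson_encode(rate_hz: f64, dt_s: f64, steps: usize, mut seed: u64) -> Vec<bool> {
    let p = (rate_hz * dt_s).min(1.0); // spike probability per step
    (0..steps)
        .map(|_| {
            // Simple 64-bit LCG, then take the top bits as a uniform in [0, 1)
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            let u = (seed >> 11) as f64 / (1u64 << 53) as f64;
            u < p // true = spike this timestep
        })
        .collect()
}
```

Over 1000 steps at `dt = 1 ms` and a 100 Hz rate, the spike count should land near 100 on average.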
## How STDP works
Classic exponential STDP implements Hebb's rule with a timer. With Δt = t_post − t_pre:

```text
Δw =  A⁺ · exp(−Δt / τ⁺)   if pre fires before post → LTP (potentiate)
Δw = −A⁻ · exp( Δt / τ⁻)   if post fires before pre → LTD (depress)
```

neuromod multiplies every Δw by a `dopamine_lr` scalar, so reward (high dopamine)
gates how much the network learns on each step — zero dopamine means zero weight
change regardless of timing.
The snippet below sketches the intended flow; the exact `apply_stdp` argument shapes are illustrative.

```rust
use neuromod::neurons::LifNeuron;
use neuromod::stdp::apply_stdp;

let mut neuron = LifNeuron::default();
neuron.weights = vec![0.5];
neuron.last_spike_time = 10;
let pre_times = vec![8]; // pre fired before post → LTP
apply_stdp(&mut neuron, &pre_times, 0.8); // 80% dopamine gate
assert!(neuron.weights[0] > 0.5); // weight was potentiated
```
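Independent of the crate's API, the reward-gated window above can be written as one plain function (names and parameters here are illustrative):

```rust
/// Reward-modulated STDP weight change for a single pre/post spike pair.
/// delta_t = t_post - t_pre: positive means pre fired first (LTP),
/// negative means post fired first (LTD). `dopamine_lr` scales the whole
/// update, so zero dopamine yields zero weight change.
fn stdp_delta_w(delta_t: f64, a_plus: f64, a_minus: f64,
                tau_plus: f64, tau_minus: f64, dopamine_lr: f64) -> f64 {
    let dw = if delta_t > 0.0 {
        a_plus * (-delta_t / tau_plus).exp() // LTP: potentiate
    } else {
        -a_minus * (delta_t / tau_minus).exp() // LTD: depress
    };
    dopamine_lr * dw
}
```

Note how the LTD branch decays for more negative Δt, mirroring the LTP branch: pairings far apart in time barely move the weight in either direction.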
## Persistence (save / load)
All public types derive `serde::Serialize` and `serde::Deserialize`, so you
can checkpoint and restore a running engine in five lines:
```rust
// Save (the path is illustrative)
engine.save_parameters("engine.json")?;

// Restore
let mut engine2 = SpikingInferenceEngine::new();
engine2.load_parameters("engine.json")?;
```
Or serialize individual neurons directly:
```rust
let json = serde_json::to_string_pretty(&neuron)?;
let restored: LifNeuron = serde_json::from_str(&json)?;
```
## vs. `spiking_neural_networks`

`spiking_neural_networks` (v0.24, ~29k downloads) focuses on biophysical
fidelity — Hodgkin-Huxley conductance models, ion channels, detailed
compartmental neurons.

`neuromod` focuses on production-ready reinforcement learning:
- Reward-modulated STDP — dopamine gates every weight update
- Full neuromodulator state — dopamine, cortisol, acetylcholine, tempo derived from hardware telemetry each tick
- Homeostatic plasticity — threshold adaptation + L1 synaptic scaling prevent runaway excitation out of the box
- First-class persistence — serde + JSON roundtrip on every public type
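As a concrete sketch of the homeostatic piece, L1-norm synaptic scaling can be written in a few lines (illustrative only — the crate's `synaptic_scaling` signature may differ):

```rust
/// Rescale a neuron's incoming weights so their absolute values sum to
/// `target_l1`, preventing runaway potentiation while preserving the
/// relative (and signed) structure learned by STDP.
fn synaptic_scaling(weights: &mut [f64], target_l1: f64) {
    let l1: f64 = weights.iter().map(|w| w.abs()).sum();
    if l1 > 0.0 {
        let scale = target_l1 / l1;
        for w in weights.iter_mut() {
            *w *= scale;
        }
    }
}
```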
## License
Licensed under either of Apache License 2.0 or MIT license at your option.