# noid
Neural boids. A small neural network replaces Reynolds' three hand-tuned flocking rules (separation, alignment, cohesion) with 1,922 learned parameters.
Each agent perceives its 5 nearest neighbors (topological, not metric) and feeds 24 numbers into a 2-layer MLP that outputs a 2D steering acceleration. No explicit rules. The behavior emerges from the weights.
## Usage
```rust
use noid::{Config, Flock};
let mut flock = Flock::new(Config::default(), 42);
loop {
    flock.tick(1.0 / 60.0);
    for pos in &flock.positions {
        // render at pos[0], pos[1]
    }
}
```
Training is built in:
```rust
use noid::train::Trainer;
let mut trainer = Trainer::new(Config::default(), 42);
for _ in 0..5000 {
    trainer.step(0.001);
}
trainer.brain().save_json();
```
Weight interpolation:
```rust
use noid::Brain;
let chaos = Brain::random(42);
let order = Brain::load_json(include_str!("weights.json"));
let halfway = Brain::lerp(&chaos, &order, 0.5);
```
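Interpolation is presumably elementwise over the flat parameter vector; a minimal sketch (not noid's internal representation):

```rust
// Elementwise lerp over two flat weight vectors: w = (1 - t) * a + t * b.
// At t = 0 you get `a` unchanged, at t = 1 you get `b`.
fn lerp_weights(a: &[f32], b: &[f32], t: f32) -> Vec<f32> {
    a.iter()
        .zip(b)
        .map(|(x, y)| x * (1.0 - t) + y * t)
        .collect()
}
```

Because every point on the line between two weight vectors is itself a valid set of weights, sweeping `t` morphs the flock's behavior continuously from one regime to the other.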
## Architecture
```
obs(24) → Linear(24×32) → SiLU → Linear(32×32) → SiLU → Linear(32×2) → tanh×60
```
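The 1,922-parameter count follows directly from the three layer shapes (weights plus biases):

```rust
// Parameter count for the three linear layers above (weights + biases).
fn param_count() -> usize {
    let layer1 = 24 * 32 + 32; // 800
    let layer2 = 32 * 32 + 32; // 1056
    let layer3 = 32 * 2 + 2;   // 66
    layer1 + layer2 + layer3   // 1922
}
```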
The observation vector:
- Own velocity: `[vx, vy]`
- Own heading: `[sin θ, cos θ]`
- 5 nearest neighbors, each: `[Δx, Δy, Δvx, Δvy]`
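Packing that vector might look like the following sketch; the field order follows the list above, but the `Agent` struct and `observe` helper are illustrative assumptions, not noid's actual types:

```rust
// Hypothetical agent state; noid's real layout may differ.
struct Agent {
    pos: [f32; 2],
    vel: [f32; 2],
}

// Assemble the 24-dim observation: own velocity, own heading,
// then (Δx, Δy, Δvx, Δvy) for each of the 5 nearest neighbors.
fn observe(me: &Agent, neighbors: &[&Agent; 5]) -> [f32; 24] {
    let mut obs = [0.0f32; 24];
    obs[0] = me.vel[0];
    obs[1] = me.vel[1];
    let theta = me.vel[1].atan2(me.vel[0]);
    obs[2] = theta.sin();
    obs[3] = theta.cos();
    for (i, n) in neighbors.iter().enumerate() {
        let base = 4 + i * 4;
        obs[base] = n.pos[0] - me.pos[0];     // Δx
        obs[base + 1] = n.pos[1] - me.pos[1]; // Δy
        obs[base + 2] = n.vel[0] - me.vel[0]; // Δvx
        obs[base + 3] = n.vel[1] - me.vel[1]; // Δvy
    }
    obs
}
```

Relative positions and velocities make the observation translation-invariant: two agents in identical local neighborhoods see identical inputs regardless of where they sit in the world.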
## Training
Imitation learning from classic boids: generate random flock configurations, compute the steering that Reynolds' three rules would produce, and train the network to match it. About 3,000 steps is enough.
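The teacher signal can be sketched as a weighted sum of the three rules; the gains and exact form here are illustrative assumptions, not necessarily noid's teacher:

```rust
// Classic Reynolds steering for one agent from its neighbors:
// separation (flee crowding), alignment (match velocity),
// cohesion (steer toward the local center). Gains are illustrative.
fn reynolds(me_pos: [f32; 2], me_vel: [f32; 2], nb: &[([f32; 2], [f32; 2])]) -> [f32; 2] {
    let k = nb.len() as f32;
    let (mut sep, mut avg_vel, mut center) = ([0.0f32; 2], [0.0f32; 2], [0.0f32; 2]);
    for (p, v) in nb {
        sep[0] += me_pos[0] - p[0];
        sep[1] += me_pos[1] - p[1];
        avg_vel[0] += v[0] / k;
        avg_vel[1] += v[1] / k;
        center[0] += p[0] / k;
        center[1] += p[1] / k;
    }
    [
        1.5 * sep[0] + (avg_vel[0] - me_vel[0]) + (center[0] - me_pos[0]),
        1.5 * sep[1] + (avg_vel[1] - me_vel[1]) + (center[1] - me_pos[1]),
    ]
}
```

The network then regresses its output against this target over many random configurations; because the teacher is cheap to evaluate, training data is effectively unlimited.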
## Performance
`tick_fast()` uses spatial hashing for amortized O(n) neighbor search and a batched forward pass. Neural inference for the whole flock is a single matrix multiplication per layer: one GPU dispatch for the entire flock.
| benchmark | neural | classic rules |
|---|---|---|
| steering (per agent) | 527 ns | 13 ns |
| full tick, n=1024 | 1.9 ms | 1.1 ms |
| full tick, n=4096 | 17 ms | 14 ms |
Neural steering itself is ~40x more expensive than the classic rules, but neighbor search dominates at scale, so the total gap on CPU narrows to ~1.2x at n=4096. On GPU, the batched matmul maps directly to shader cores.
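A spatial hash of the kind `tick_fast()` relies on can be sketched as follows; the cell size and the fixed 3×3 query are simplifying assumptions (a true 5-nearest search would expand the ring outward until enough candidates are found):

```rust
use std::collections::HashMap;

// Bucket agents by grid cell so neighbor candidates come from the
// 3×3 cells around an agent instead of the whole flock.
fn build_grid(positions: &[[f32; 2]], cell: f32) -> HashMap<(i32, i32), Vec<usize>> {
    let mut grid: HashMap<(i32, i32), Vec<usize>> = HashMap::new();
    for (i, p) in positions.iter().enumerate() {
        let key = ((p[0] / cell).floor() as i32, (p[1] / cell).floor() as i32);
        grid.entry(key).or_default().push(i);
    }
    grid
}

// Collect indices from the 3×3 block of cells centered on `p`.
fn candidates(grid: &HashMap<(i32, i32), Vec<usize>>, p: [f32; 2], cell: f32) -> Vec<usize> {
    let (cx, cy) = ((p[0] / cell).floor() as i32, (p[1] / cell).floor() as i32);
    let mut out = Vec::new();
    for dx in -1..=1 {
        for dy in -1..=1 {
            if let Some(ids) = grid.get(&(cx + dx, cy + dy)) {
                out.extend_from_slice(ids);
            }
        }
    }
    out
}
```

With roughly uniform density, each query touches a constant number of agents, which is where the amortized O(n) total comes from.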
## License
MIT