# RuVector Nervous System
A five-layer bio-inspired nervous system for AI applications. Think less "smart algorithm" and more "living organism."
## What Is This?
Most AI systems are like assembly lines: data goes in, predictions come out, repeat forever. This crate takes a different approach. It gives your software a nervous system - the same kind of layered architecture that lets living creatures sense danger, react instantly, learn from experience, and rest when they need to.
The result? Systems that:
- React in microseconds instead of waiting for batch processing
- Learn from single examples instead of retraining on millions
- Stay quiet when nothing changes instead of burning compute continuously
- Know when they're struggling instead of failing silently
"From 'How do we make machines smarter?' to 'What kind of organism are we building?'"
## The Five Layers
Every living nervous system has specialized layers. So does this one:
```mermaid
graph TD
    subgraph "COHERENCE LAYER"
        A1[Global Workspace]
        A2[Oscillatory Routing]
        A3[Predictive Coding]
    end
    subgraph "LEARNING LAYER"
        B1[BTSP One-Shot]
        B2[E-prop Online]
        B3[EWC Consolidation]
    end
    subgraph "MEMORY LAYER"
        C1[Hopfield Networks]
        C2[HDC Vectors]
        C3[Pattern Separation]
    end
    subgraph "REFLEX LAYER"
        D1[K-WTA Competition]
        D2[Dendritic Detection]
        D3[Safety Gates]
    end
    subgraph "SENSING LAYER"
        E1[Event Bus]
        E2[Sparse Spikes]
        E3[Backpressure]
    end
    A1 --> B1
    A2 --> B2
    A3 --> B3
    B1 --> C1
    B2 --> C2
    B3 --> C3
    C1 --> D1
    C2 --> D2
    C3 --> D3
    D1 --> E1
    D2 --> E2
    D3 --> E3
```
| Layer | What It Does | Why It Matters |
|---|---|---|
| Sensing | Converts continuous data into sparse events | Only process what changed. 10,000+ events/ms throughput. |
| Reflex | Instant decisions via winner-take-all competition | <1μs response time. No thinking required. |
| Memory | Stores patterns in hyperdimensional space | 10^40 capacity. Retrieve similar patterns in <100ns. |
| Learning | One-shot and online adaptation | Learn immediately. No batch retraining. |
| Coherence | Coordinates what gets attention | 90-99% bandwidth savings. Global workspace for focus. |
## Why This Architecture?
```mermaid
graph LR
    subgraph Traditional["Traditional AI"]
        T1[Batch Data] --> T2[Train Model]
        T2 --> T3[Deploy]
        T3 --> T4[Inference Loop]
        T4 --> T1
    end
    subgraph NervousSystem["Nervous System"]
        N1[Events] --> N2[Reflex]
        N2 --> N3{Familiar?}
        N3 -->|Yes| N4[Instant Response]
        N3 -->|No| N5[Learn + Remember]
        N5 --> N4
        N4 --> N1
    end
```
| Traditional AI | Nervous System |
|---|---|
| Always processing | Mostly quiet, reacts when needed |
| Learns from batches | Learns from single examples |
| Fails silently | Knows when it's struggling |
| Scales with more compute | Scales with better organization |
| Static after deployment | Adapts through use |
## Features
### Sensing Layer

**Event Bus** - Lock-free ring buffers with region-based sharding (sketched below)
- <100ns push/pop operations
- 10,000+ events/ms sustained throughput
- Automatic backpressure when overwhelmed
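
A minimal, single-threaded sketch of the backpressure contract, using only the standard library. The real bus is lock-free and sharded by region; the `EventBus` and `Push` names here are illustrative:

```rust
use std::collections::VecDeque;

/// Outcome of offering an event to the bus.
enum Push {
    Accepted,
    Backpressure,
}

/// Bounded event queue illustrating the backpressure contract only,
/// not the lock-free, region-sharded implementation.
struct EventBus<T> {
    buf: VecDeque<T>,
    cap: usize,
}

impl<T> EventBus<T> {
    fn new(cap: usize) -> Self {
        Self { buf: VecDeque::with_capacity(cap), cap }
    }

    /// Producers must handle `Backpressure` by slowing down or dropping.
    fn push(&mut self, ev: T) -> Push {
        if self.buf.len() == self.cap {
            Push::Backpressure
        } else {
            self.buf.push_back(ev);
            Push::Accepted
        }
    }

    fn pop(&mut self) -> Option<T> {
        self.buf.pop_front()
    }
}
```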
### Reflex Layer

**K-Winner-Take-All (K-WTA)** - Instant decisions
- <1μs single winner selection for 1000 neurons
- Lateral inhibition for sparse activation
- HNSW-compatible routing
**Dendritic Coincidence Detection** - Temporal pattern matching (sketched below)
- NMDA-like nonlinearity with 10-50ms windows
- Plateau potentials for learning gates
- Reduced compartment models
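
The core idea fits in a few lines. A from-scratch sketch (not the crate's API): a plateau fires only when at least `k` spikes land inside one coincidence window:

```rust
/// Fires when at least `k` spikes land inside a `window_ms` coincidence
/// window, mimicking NMDA-like supralinear summation on a dendrite.
fn plateau(spike_times_ms: &mut Vec<f64>, window_ms: f64, k: usize) -> bool {
    spike_times_ms.sort_by(|a, b| a.partial_cmp(b).unwrap());
    spike_times_ms.windows(k).any(|w| w[k - 1] - w[0] <= window_ms)
}

fn main() {
    // Three spikes inside 10 ms -> plateau; the 100 ms straggler is ignored.
    let mut spikes = vec![12.0, 14.5, 100.0, 15.2];
    assert!(plateau(&mut spikes, 10.0, 3));
}
```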
### Memory Layer

**Hyperdimensional Computing (HDC)** - Ultra-fast similarity (sketched below)
- 10,000-bit binary hypervectors
- XOR binding in <50ns
- Hamming similarity in <100ns via SIMD
- 10^40 representational capacity
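
The primitives are simple enough to show from scratch. An illustrative implementation of XOR binding and Hamming similarity over packed 64-bit words (the crate's own types and SIMD paths are not shown):

```rust
/// 10,048 bits packed into 157 u64 words (~10,000-bit hypervector).
const WORDS: usize = 157;
type Hv = [u64; WORDS];

/// Binding is elementwise XOR: the result is dissimilar to both inputs,
/// and binding with `b` again recovers `a` (XOR is its own inverse).
fn bind(a: &Hv, b: &Hv) -> Hv {
    let mut out = [0u64; WORDS];
    for i in 0..WORDS {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// Similarity = 1 - normalized Hamming distance (0.5 ~ unrelated vectors).
fn similarity(a: &Hv, b: &Hv) -> f64 {
    let diff: u32 = (0..WORDS).map(|i| (a[i] ^ b[i]).count_ones()).sum();
    1.0 - f64::from(diff) / (WORDS * 64) as f64
}
```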
**Modern Hopfield Networks** - Exponential pattern storage (sketched below)
- 2^(d/2) patterns in d dimensions
- Mathematically equivalent to transformer attention
- <1ms retrieval for 1000 patterns
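
Retrieval is a single attention-like update. A from-scratch sketch of one step, with `beta` as an illustrative sharpness parameter:

```rust
/// One modern-Hopfield retrieval step:
///   xi' = sum_i softmax(beta * <pattern_i, xi>) * pattern_i
/// i.e. the same form as transformer attention, with the probe as query.
fn hopfield_step(patterns: &[Vec<f64>], xi: &[f64], beta: f64) -> Vec<f64> {
    // Dot-product score of the probe against every stored pattern
    let scores: Vec<f64> = patterns
        .iter()
        .map(|p| beta * p.iter().zip(xi).map(|(a, b)| a * b).sum::<f64>())
        .collect();
    // Numerically stable softmax over patterns
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let z: f64 = exps.iter().sum();
    // Retrieved memory = softmax-weighted average of stored patterns
    let mut out = vec![0.0; xi.len()];
    for (w, p) in exps.iter().zip(patterns) {
        for (o, v) in out.iter_mut().zip(p) {
            *o += (w / z) * v;
        }
    }
    out
}
```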
**Pattern Separation** - Collision-free encoding (sketched below)
- Inspired by the hippocampal dentate gyrus
- 2-5% sparsity matching cortical statistics
- <1% collision rate
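
A hedged sketch of the expand-then-sparsify recipe behind pattern separation (names are illustrative; the crate's encoder is more involved):

```rust
/// Expand into a larger population, then keep only the top `k` units
/// (2-5% of them), zeroing the rest. Nearby inputs land on mostly
/// disjoint winner sets, which is what "pattern separation" means here.
fn separate(input: &[f64], expansion: &[Vec<f64>], k: usize) -> Vec<f64> {
    // Each output unit takes a (random, fixed) weighted view of the input
    let mut acts: Vec<(usize, f64)> = expansion
        .iter()
        .map(|row| row.iter().zip(input).map(|(w, x)| w * x).sum::<f64>())
        .enumerate()
        .collect();
    // k-WTA: only the k most active units survive
    acts.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let mut sparse = vec![0.0; expansion.len()];
    for &(i, v) in acts.iter().take(k) {
        sparse[i] = v;
    }
    sparse
}
```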
### Learning Layer

**BTSP (Behavioral Timescale Synaptic Plasticity)** - One-shot learning
- Learn from single exposure (1-3 second windows)
- Eligibility traces with bidirectional plasticity
- No batch training required
**E-prop (Eligibility Propagation)** - Online learning (sketched below)
- O(1) memory per synapse (12 bytes)
- 1000+ ms temporal credit assignment
- No backprop through time
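
The per-synapse state is three f32s, which is where the 12 bytes come from. A sketch of the online update with illustrative field names and constants:

```rust
/// Per-synapse e-prop state: three f32s = 12 bytes, O(1) in sequence length.
struct Synapse {
    weight: f32,
    trace: f32,    // fast eligibility trace
    filtered: f32, // low-pass filtered trace
}

impl Synapse {
    /// One online step: decay the trace, fold in presynaptic activity,
    /// then gate the weight change with a broadcast learning signal.
    /// No unrolling, no backprop through time.
    fn step(&mut self, pre: f32, learn_signal: f32, decay: f32, lr: f32) {
        self.trace = decay * self.trace + pre;
        self.filtered = 0.9 * self.filtered + 0.1 * self.trace;
        self.weight += lr * learn_signal * self.filtered;
    }
}
```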
**EWC (Elastic Weight Consolidation)** - Remember old tasks (sketched below)
- 45% forgetting reduction
- Fisher Information regularization
- Complementary Learning Systems
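
The heart of EWC is one quadratic penalty added to the task loss. This sketch transcribes that formula directly (function and argument names are illustrative):

```rust
/// EWC penalty: (lambda / 2) * sum_i F_i * (theta_i - theta_star_i)^2.
/// `fisher` is the diagonal Fisher information from the old task; a large
/// F_i marks parameter i as important, so it is pulled back harder.
fn ewc_penalty(theta: &[f64], theta_star: &[f64], fisher: &[f64], lambda: f64) -> f64 {
    lambda / 2.0
        * theta
            .iter()
            .zip(theta_star)
            .zip(fisher)
            .map(|((t, ts), f)| f * (t - ts).powi(2))
            .sum::<f64>()
}
```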
### Coherence Layer

**Oscillatory Routing** - Phase-coupled communication (sketched below)
- Kuramoto oscillators for synchronization
- Communication gain based on phase alignment
- 40Hz gamma band coordination
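
The underlying Kuramoto update is compact. A from-scratch Euler-step sketch, not the crate's router:

```rust
use std::f64::consts::TAU;

/// One Euler step of the Kuramoto model:
///   dtheta_i/dt = omega_i + (K / N) * sum_j sin(theta_j - theta_i)
/// As coupling K rises past a critical value, phases lock; aligned
/// phases are what gives two regions a high communication gain.
fn kuramoto_step(phases: &mut [f64], omegas: &[f64], coupling: f64, dt: f64) {
    let n = phases.len() as f64;
    let snapshot = phases.to_vec();
    for (i, theta) in phases.iter_mut().enumerate() {
        let drive: f64 = snapshot.iter().map(|tj| (tj - *theta).sin()).sum();
        *theta = (*theta + dt * (omegas[i] + coupling / n * drive)).rem_euclid(TAU);
    }
}
```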
**Global Workspace** - Focus of attention (sketched below)
- 4-7 item capacity (Miller's law)
- Broadcast/compete architecture
- Relevance-based ignition
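
A minimal sketch of the compete-then-broadcast rule (types are illustrative; the crate adds ignition dynamics on top):

```rust
/// Capacity-limited workspace: candidates compete on relevance and only
/// the top few "ignite"; everything else is dropped, not queued.
struct Workspace {
    capacity: usize,           // 4-7 in practice
    items: Vec<(String, f64)>, // (content, relevance)
}

impl Workspace {
    fn broadcast(&mut self, item: String, relevance: f64) {
        self.items.push((item, relevance));
        // Competition: keep only the most relevant `capacity` items
        self.items.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
        self.items.truncate(self.capacity); // losers fall out of awareness
    }
}
```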
**Predictive Coding** - Only transmit surprises (sketched below)
- 90-99% bandwidth reduction
- Precision-weighted prediction errors
- Hierarchical error propagation
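
The transmit-only-surprises rule reduces to a few lines for a scalar channel. A sketch with an illustrative smoothing constant:

```rust
/// Scalar predictive-coding channel: keep a running prediction and
/// transmit only precision-weighted errors that clear a threshold.
/// Steady inputs converge to the prediction and go silent.
struct Channel {
    prediction: f64,
    precision: f64, // inverse noise: confident channels shout louder
    threshold: f64,
}

impl Channel {
    /// Returns Some(weighted error) only when the input is surprising.
    fn observe(&mut self, x: f64) -> Option<f64> {
        let err = self.precision * (x - self.prediction);
        self.prediction += 0.1 * (x - self.prediction); // slow model update
        (err.abs() > self.threshold).then_some(err)
    }
}
```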
### Circadian Controller (NEW)

**SCN-Inspired Duty Cycling** - Rest when idle (SCN: the suprachiasmatic nucleus, the brain's master clock)
- Phase-aligned activity (Active/Dawn/Dusk/Rest)
- 5-50× compute savings during quiet periods
- Hysteresis thresholds prevent flapping
- Budget guardrails for automatic deceleration
### Nervous System Scorecard (NEW)
Five metrics that define system health:
| Metric | What It Measures | Target |
|---|---|---|
| Silence Ratio | How often the system stays calm | >70% |
| TTD P50/P95 | Time to decision (median / 95th percentile latency) | <1ms / <10ms |
| Energy per Spike | Efficiency per meaningful change | Minimize |
| Write Amplification | Memory writes per event | <3× |
| Calmness Index | Post-learning stability | >0.8 |
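
Two of these metrics fall straight out of simple counters. An illustrative sketch:

```rust
/// Running counters behind the scorecard ratios (names are illustrative).
struct Scorecard {
    ticks: u64,        // total control-loop ticks
    silent_ticks: u64, // ticks with zero spikes emitted
    events: u64,       // events accepted by the bus
    mem_writes: u64,   // writes to the memory layer
}

impl Scorecard {
    /// Fraction of ticks spent fully quiet (target: > 0.70).
    fn silence_ratio(&self) -> f64 {
        self.silent_ticks as f64 / self.ticks.max(1) as f64
    }

    /// Memory writes per accepted event (target: < 3.0).
    fn write_amplification(&self) -> f64 {
        self.mem_writes as f64 / self.events.max(1) as f64
    }
}
```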
## Examples: From Practical to SOTA

All examples are in the unified `examples/tiers/` folder:

- **Tier 1** - Ready to Ship Today
- **Tier 2** - Transformative Applications
- **Tier 3** - Exotic Research
- **Tier 4** - SOTA Research Frontiers
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
# Crate name inferred from the project title; confirm before depending on it.
ruvector-nervous-system = "0.1"
```
### One-Shot Learning (BTSP)

```rust
// Module path and argument lists below are illustrative, not exact.
use ruvector_nervous_system::learning::BTSPLayer;

// Create a layer with a 2-second (2000 ms) learning window
let mut layer = BTSPLayer::new(1024, 2000);

// Learn from a single example
let pattern = vec![1.0_f32; 1024];
let target = vec![0.5_f32; 1024]; // desired output for this pattern
layer.one_shot_associate(&pattern, &target);

// Immediate recall - no training loop!
let output = layer.forward(&pattern);
```
### Ultra-Fast Similarity (HDC)

```rust
// Module path, type names, and arguments below are illustrative.
use ruvector_nervous_system::memory::{HdcMemory, Hypervector};

// 10,000-bit hypervectors
let apple = Hypervector::random();
let orange = Hypervector::random();

// Bind concepts (<50ns)
let fruit = apple.bind(&orange);

// Similarity check (<100ns)
let sim = apple.similarity(&orange);

// Store and retrieve
let mut memory = HdcMemory::new();
memory.store("apple", &apple);
let results = memory.retrieve(&apple, 5); // top-5 nearest patterns
```
### Instant Decisions (WTA)

```rust
// Module path and arguments below are illustrative.
use ruvector_nervous_system::reflex::WTALayer;

// 1000 competing neurons
let mut wta = WTALayer::new(1000);

// Winner in <1μs
let activations = vec![0.0_f32; 1000]; // activity from the sensing layer
if let Some(winner) = wta.compete(&activations) {
    println!("neuron {winner} wins");
}
```
### Phase-Coupled Routing

```rust
// Module path, type names, and arguments below are illustrative.
use ruvector_nervous_system::coherence::{GlobalWorkspace, OscillatoryRouter};

// 40Hz gamma oscillators
let mut router = OscillatoryRouter::new(64, 40.0);
router.step(0.001); // advance by one 1 ms tick

// Communication gain from phase alignment (here: between regions 0 and 1)
let gain = router.communication_gain(0, 1);

// Global workspace (4-7 items max)
let mut workspace = GlobalWorkspace::new(7);
let item = "salient pattern"; // whatever just won the reflex competition
workspace.broadcast(item);
```
### Circadian Duty Cycling

```rust
// Module path, type names, and arguments below are illustrative.
use ruvector_nervous_system::coherence::{BudgetGuard, CircadianController, HysteresisTracker};

// 24-hour cycle controller
let mut clock = CircadianController::new();
clock.set_coherence(0.9);

// Phase-aware compute decisions
if clock.should_compute() { /* run the reflex path */ }
if clock.should_learn() { /* apply one-shot/online updates */ }
if clock.should_consolidate() { /* run EWC consolidation */ }

// Hysteresis: require 5 ticks above threshold before switching phase
let mut tracker = HysteresisTracker::new(5, 0.7);
let activity = 0.85; // current activity level
if tracker.update(activity) { /* phase change confirmed */ }

// Budget: auto-decelerate when overspending
let mut budget = BudgetGuard::new(1_000.0);
budget.record_spend(42.0);
let duty = clock.duty_factor * budget.duty_multiplier;
```
## Data Flow Architecture
```mermaid
sequenceDiagram
    participant Sensors
    participant EventBus
    participant Reflex
    participant Memory
    participant Learning
    participant Coherence
    Sensors->>EventBus: Sparse events
    EventBus->>Reflex: K-WTA competition
    alt Familiar Pattern
        Reflex->>Memory: Query HDC/Hopfield
        Memory-->>Reflex: Instant match
        Reflex->>Sensors: Immediate response
    else Novel Pattern
        Reflex->>Learning: BTSP/E-prop update
        Learning->>Memory: Store new pattern
        Learning->>Coherence: Request attention
        Coherence->>Sensors: Coordinated response
    end
    Note over Coherence: Circadian controller gates all layers
```
## Performance Benchmarks
| Component | Target | Achieved |
|---|---|---|
| HDC Binding | <50ns | 64ns |
| HDC Similarity | <100ns | ~80ns |
| WTA Single Winner | <1μs | <1μs |
| K-WTA (k=50) | <10μs | 2.7μs |
| Hopfield Retrieval | <1ms | <1ms |
| Pattern Separation | <500μs | <500μs |
| E-prop Synapse Memory | 8-12 bytes | 12 bytes |
| Event Bus | 10K events/ms | 10K+ events/ms |
| Circadian Savings | 5-50× | Phase-dependent |
## Biological References
| Component | Research Basis |
|---|---|
| HDC | Kanerva 1988, Plate 2003 |
| Modern Hopfield | Ramsauer et al. 2020 |
| Pattern Separation | Rolls 2013, Dentate Gyrus |
| Dendritic Processing | Stuart & Spruston 2015 |
| BTSP | Bittner et al. 2017 |
| E-prop | Bellec et al. 2020 |
| EWC | Kirkpatrick et al. 2017 |
| Oscillatory Routing | Fries 2015 |
| Global Workspace | Baars 1988, Dehaene 2014 |
| Circadian Rhythms | Moore 2007, SCN research |
## Documentation
- Architecture Guide - Complete crate layout
- Deployment Guide - Production deployment
- Test Plan - Benchmarks and quality
- Examples README - All tier examples
## What You're Really Getting
This isn't about making AI faster or smarter in the traditional sense. It's about building systems that:
- **Survive** - Degrade gracefully instead of crashing
- **Adapt** - Learn through use, not retraining
- **Rest** - Stay quiet when nothing happens
- **Know themselves** - Sense when they're struggling
You're not shipping faster inference. You're shipping a system that stays quiet, waits, and then reacts with intent.
## License
MIT License - See LICENSE
## Contributing
Contributions welcome! Each module should include:
- Comprehensive unit tests
- Criterion benchmarks
- Documentation with biological context
- Examples demonstrating use cases