# Engram AI 🧠 (Rust)

Neuroscience-grounded memory system for AI agents: pure Rust, zero external dependencies.

Give your AI agent a brain that actually remembers, associates, and forgets like a human.

🐍 Also available in Python: `engramai` on PyPI · GitHub. Includes semantic search (50+ languages), an MCP server, and multiple embedding providers.
## Why Engram?

Traditional AI memory = vector database + cosine similarity. That ignores decades of neuroscience. Engram implements how human memory actually works:
| Principle | What it does | Traditional approach |
|---|---|---|
| ACT-R Activation | Frequently-used, recent, important memories rank higher | All memories equal |
| Hebbian Learning | Co-accessed memories auto-link | No associations |
| Ebbinghaus Forgetting | Unused memories decay naturally | Never forgets |
| Consolidation | Working → long-term transfer (like sleep) | No memory tiers |
| Dopaminergic Reward | Successful actions strengthen memories | No feedback |
## Performance

Benchmarked at 500 memories:

| Metric | Result |
|---|---|
| Store | 69ms (~0.14ms each) |
| Recall | 5ms |
| Consolidate | 60ms |
| Binary size | ~5MB |
| Memory footprint | ~5MB |
## Quick Start

```toml
[dependencies]
engramai = "0.1"
```

```rust
// Sketch based on the API Reference below; exact signatures may differ.
use engramai::{Memory, MemoryConfig};

let mut mem = Memory::new("memories.db", MemoryConfig::chatbot());
let id = mem.add("Alice prefers concise answers", "relational", 0.6, "chat", None);
let results = mem.recall("Alice preferences", 5, None, 0.0);
```
## Memory Types

| Type | Use case | Default importance |
|---|---|---|
| `Factual` | Facts and knowledge | 0.3 |
| `Episodic` | Events and experiences | 0.4 |
| `Relational` | Knowledge about people/entities | 0.6 |
| `Emotional` | Emotionally significant (slow decay) | 0.9 |
| `Procedural` | How-to knowledge (slow decay) | 0.5 |
| `Opinion` | Subjective beliefs | 0.3 |
| `Causal` | Cause-effect relationships | 0.7 |
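The table above can be mirrored as a plain Rust sketch. Note that `MemoryType` and `default_importance` are hypothetical names used for illustration, not necessarily the crate's real API:

```rust
/// Illustrative sketch of the memory-type table above;
/// names are hypothetical, not the crate's actual API.
#[derive(Debug, Clone, Copy)]
enum MemoryType {
    Factual,
    Episodic,
    Relational,
    Emotional,
    Procedural,
    Opinion,
    Causal,
}

impl MemoryType {
    fn default_importance(self) -> f64 {
        match self {
            MemoryType::Factual => 0.3,
            MemoryType::Episodic => 0.4,
            MemoryType::Relational => 0.6,
            MemoryType::Emotional => 0.9,  // emotionally significant: slow decay
            MemoryType::Procedural => 0.5, // how-to knowledge: slow decay
            MemoryType::Opinion => 0.3,
            MemoryType::Causal => 0.7,
        }
    }
}
```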
## Cognitive Models

### ACT-R Activation

A_i = B_i + Σ(W_j · S_ji) + importance_boost

B_i = ln(Σ_k t_k^(-d))    (frequency × recency, power-law decay)
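As a concrete sketch, the base-level term alone can be computed from access timestamps. The function below is a hypothetical helper for illustration (d = 0.5 is the classic ACT-R decay default; engramai's internal defaults are not shown here):

```rust
/// Base-level activation B_i = ln(Σ_k t_k^(-d)), where t_k is the time
/// since the k-th access and d is the decay rate.
/// Hypothetical helper, not the crate's API.
fn base_level_activation(times_since_access: &[f64], d: f64) -> f64 {
    times_since_access
        .iter()
        .map(|t| t.powf(-d)) // each past access contributes t^(-d)
        .sum::<f64>()
        .ln()
}
```

More accesses and more recent accesses both raise B_i, which is exactly the "frequency × recency" behavior described above.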
### Memory Chain (Consolidation)

dr₁/dt = -μ₁ · r₁(t)            (working memory, fast decay)

dr₂/dt = α · r₁(t) - μ₂ · r₂(t)  (core memory, slow decay)
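A forward-Euler sketch of these two coupled ODEs makes the consolidation dynamic visible; the parameter values below are arbitrary illustrations, not the crate's defaults:

```rust
/// One Euler step of the memory-chain model:
///   dr1/dt = -mu1*r1            (working store, fast decay)
///   dr2/dt =  alpha*r1 - mu2*r2 (core store, fed by consolidation)
fn euler_step(r1: f64, r2: f64, mu1: f64, mu2: f64, alpha: f64, dt: f64) -> (f64, f64) {
    (r1 + dt * (-mu1 * r1), r2 + dt * (alpha * r1 - mu2 * r2))
}

/// Simulate a fresh trace that starts entirely in working memory.
fn simulate(steps: usize) -> (f64, f64) {
    let (mut r1, mut r2) = (1.0, 0.0);
    for _ in 0..steps {
        let next = euler_step(r1, r2, 0.5, 0.01, 0.2, 0.1);
        r1 = next.0;
        r2 = next.1;
    }
    (r1, r2)
}
```

With μ₁ ≫ μ₂, the working trace decays away quickly while the core trace builds up and then fades only slowly, which is the consolidation pattern the equations describe.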
### Ebbinghaus Forgetting

R(t) = e^(-t/S)

S = base_S × spacing_factor × importance_factor
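Numerically, these two formulas are straightforward; the helpers below are illustrative, not the crate's API:

```rust
/// Ebbinghaus retention R(t) = e^(-t/S); larger stability S means slower forgetting.
fn retention(t: f64, s: f64) -> f64 {
    (-t / s).exp()
}

/// Stability grows with spaced review and importance,
/// per S = base_S * spacing_factor * importance_factor.
fn stability(base_s: f64, spacing_factor: f64, importance_factor: f64) -> f64 {
    base_s * spacing_factor * importance_factor
}
```

For example, after the same interval, an important well-spaced memory (spacing 2.0, importance 1.5) retains far more than a one-shot low-importance one (spacing 1.0, importance 0.5).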
### Hebbian Learning

Co-accessed memories form associative links; spreading activation boosts related memories on recall.
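The mechanism can be sketched as a co-access update rule plus one hop of spreading activation. All names below are hypothetical (the crate itself exposes links via `mem.hebbian_links(id)`):

```rust
use std::collections::HashMap;

/// Minimal Hebbian link store: each co-access strengthens a symmetric edge,
/// saturating toward 1.0 ("neurons that fire together wire together").
struct HebbianGraph {
    weights: HashMap<(u64, u64), f64>,
}

impl HebbianGraph {
    fn new() -> Self {
        HebbianGraph { weights: HashMap::new() }
    }

    /// Strengthen the link between two co-accessed memories (order-independent).
    fn co_access(&mut self, a: u64, b: u64, learning_rate: f64) {
        let key = if a < b { (a, b) } else { (b, a) };
        let w = self.weights.entry(key).or_insert(0.0);
        *w += learning_rate * (1.0 - *w); // bounded Hebbian update
    }

    /// One hop of spreading activation: each neighbor receives
    /// link weight x the source memory's activation.
    fn spread(&self, source: u64, activation: f64) -> Vec<(u64, f64)> {
        self.weights
            .iter()
            .filter_map(|(&(a, b), &w)| {
                if a == source {
                    Some((b, w * activation))
                } else if b == source {
                    Some((a, w * activation))
                } else {
                    None
                }
            })
            .collect()
    }
}
```

The bounded update keeps weights in [0, 1), so frequently co-accessed pairs approach a strong link without growing without limit.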
Configuration Presets
use MemoryConfig;
let config = chatbot; // Slow decay, high replay
let config = task_agent; // Fast decay, recent context
let config = personal_assistant; // Very slow decay, months of memory
let config = researcher; // Minimal forgetting
## IronClaw Integration

Works as a cognitive memory layer alongside IronClaw's FTS + pgvector workspace memory. See `examples/ironclaw_integration.rs` for a complete example.

```rust
// IronClaw's workspace: FTS + pgvector → document search
// + engramai: ACT-R + Hebbian → cognitive memory
//
// They complement each other:
// - Workspace memory for searching docs, notes, code
// - Engram for agent personality, preferences, learned patterns
```

Issue: nearai/ironclaw#739
## API Reference

| Method | Description |
|---|---|
| `Memory::new(path, config)` | Create or open a database |
| `mem.add(content, type, importance, source, metadata)` | Store a memory |
| `mem.recall(query, limit, context, min_confidence)` | Retrieve with ACT-R ranking |
| `mem.consolidate(days)` | Run a consolidation cycle |
| `mem.forget(memory_id, threshold)` | Prune weak memories |
| `mem.reward(feedback, recent_n)` | Dopaminergic feedback |
| `mem.downscale(factor)` | Global synaptic downscaling |
| `mem.stats()` | Memory system statistics |
| `mem.pin(id)` / `mem.unpin(id)` | Pin/unpin memories |
| `mem.hebbian_links(id)` | Get associative neighbors |
## Python vs Rust

| Feature | Python (`pip install engramai`) | Rust (`cargo add engramai`) |
|---|---|---|
| ACT-R activation | ✅ | ✅ |
| Hebbian learning | ✅ | ✅ |
| Ebbinghaus forgetting | ✅ | ✅ |
| Consolidation | ✅ | ✅ |
| STDP causal inference | ✅ | ✅ |
| Vector embeddings | ✅ (50+ languages) | ⏳ planned |
| MCP server | ✅ | ⏳ planned |
| Recall latency | ~10ms | ~1-5ms |
| Memory footprint | ~50MB | ~5MB |
| Deployment | Requires Python | Single binary |
## Roadmap

- v0.2: Namespace isolation for multi-agent shared memory
- v0.2: ACL: CEO agent controls cross-agent memory access
- v0.2: CLI binary (`engram store`, `engram recall`)
- v0.3: Emotional Bus: memory → SOUL → HEARTBEAT closed loop
- v0.3: Vector embeddings (optional, for semantic search)
## License

AGPL-3.0-or-later; see LICENSE. Commercial licensing is available; see COMMERCIAL-LICENSE.md.
## Citation

## Acknowledgments

- ACT-R – Anderson, J. R. (2007). Carnegie Mellon University.
- Memory Chain Model – Murre, J. M., & Chessa, A. G. (2011).
- Forgetting Curve – Ebbinghaus, H. (1885).
- Hebbian Learning – Hebb, D. O. (1949).