# Engram

A neuroscience-grounded memory system for AI agents — a Rust port of the Python Engram.

Engram implements cognitive-science models (ACT-R, the Memory Chain Model, Ebbinghaus forgetting, Hebbian learning) for AI agent memory, providing a scientifically grounded alternative to naive vector-similarity search. It works with any Rust-based agent framework (IronClaw, Rig, custom agents).
## Features
- ACT-R Activation: Retrieval based on frequency × recency (power law) + spreading activation
- Memory Chain Model: Dual-trace consolidation mimicking hippocampus → neocortex transfer
- Ebbinghaus Forgetting: Exponential decay with spaced repetition strengthening
- Hebbian Learning: Co-activation automatically forms associative links
- STDP (Spike-Timing-Dependent Plasticity): Temporal patterns infer causal relationships
- SQLite Storage: Persistent storage with FTS5 full-text search
- Zero External Dependencies: No embeddings, no API calls — pure cognitive models
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
engram = "0.1"
```
### Basic Usage

```rust
use engram::{Memory, MemoryConfig, MemoryType};

// Signatures follow the API Reference below; exact types may differ.
let mut mem = Memory::new("agent.db", MemoryConfig::default())?;
mem.add("User prefers concise answers", MemoryType::Relational, 0.8, "chat", None)?;
let results = mem.recall("user preferences", 5, None, 0.1)?;
```
## Cognitive Models

### ACT-R Activation
```
A_i = B_i + Σ_j(W_j · S_ji) + importance_boost - contradiction_penalty
B_i = ln(Σ_k t_k^(-d))
```

Memories with high frequency (many accesses) and recency (recent accesses) have higher base-level activation. Context keywords provide a spreading-activation boost.
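As a minimal sketch (the function name and hour-based units are illustrative, not part of the engram API), the base-level term can be computed directly from the ages of past accesses, with the classic ACT-R decay rate d = 0.5:

```rust
// Illustrative sketch of ACT-R base-level activation (not the engram API):
// B_i = ln(Σ_k t_k^(-d)), where t_k is the age of the k-th access and d
// is the decay rate. Recent and frequent accesses dominate the sum.
fn base_level_activation(access_ages_hours: &[f64], d: f64) -> f64 {
    access_ages_hours.iter().map(|t| t.powf(-d)).sum::<f64>().ln()
}

fn main() {
    // A memory accessed 1h, 24h, and 72h ago:
    let b = base_level_activation(&[1.0, 24.0, 72.0], 0.5);
    println!("B = {b:.3}");
}
```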
### Memory Chain Model

```
dr₁/dt = -μ₁ · r₁(t)               (hippocampal trace, fast decay)
dr₂/dt = α · r₁(t) - μ₂ · r₂(t)    (neocortical trace, slow decay)
```
During consolidation ("sleep"), working memories transfer to core memories. This prevents catastrophic forgetting while allowing gradual knowledge integration.
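A forward-Euler integration of the two traces, using the default preset values (μ₁ = 0.15, μ₂ = 0.005, α = 0.08) from the table below; the code itself is an illustrative sketch, not the engram implementation:

```rust
// Forward-Euler sketch of the Memory Chain Model (not the engram API):
// dr1/dt = -mu1 * r1              (hippocampal trace, fast decay)
// dr2/dt = alpha * r1 - mu2 * r2  (neocortical trace, slow decay)
fn step(r1: f64, r2: f64, mu1: f64, mu2: f64, alpha: f64, dt: f64) -> (f64, f64) {
    (r1 + dt * (-mu1 * r1), r2 + dt * (alpha * r1 - mu2 * r2))
}

fn simulate(steps: usize, dt: f64, mu1: f64, mu2: f64, alpha: f64) -> (f64, f64) {
    let (mut r1, mut r2) = (1.0, 0.0);
    for _ in 0..steps {
        let (a, b) = step(r1, r2, mu1, mu2, alpha, dt);
        r1 = a;
        r2 = b;
    }
    (r1, r2)
}

fn main() {
    // 10 time units at dt = 0.1: the fast trace decays while the slow
    // trace accumulates what the fast one hands over.
    let (r1, r2) = simulate(100, 0.1, 0.15, 0.005, 0.08);
    println!("hippocampal: {r1:.3}, neocortical: {r2:.3}");
}
```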
### Ebbinghaus Forgetting

```
R(t) = e^(-t/S)
S = base_S × spacing_factor × importance_factor × consolidation_factor
```
Retrievability decays exponentially. Spaced repetition (repeated access at increasing intervals) dramatically increases stability.
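A sketch of the retrievability curve (illustrative, not the engram API; the spacing factor of 4 below is a hypothetical value standing in for the product of factors above):

```rust
// Illustrative sketch of Ebbinghaus retrievability (not the engram API):
// R(t) = e^(-t/S). A larger stability S means slower forgetting.
fn retrievability(t_hours: f64, stability_hours: f64) -> f64 {
    (-t_hours / stability_hours).exp()
}

fn main() {
    // After one day with S = 24h, retrievability falls to e^-1 ≈ 0.37.
    let unrehearsed = retrievability(24.0, 24.0);
    // Spaced repetition multiplies S (hypothetical spacing factor of 4):
    let rehearsed = retrievability(24.0, 24.0 * 4.0);
    println!("no rehearsal: {unrehearsed:.2}, spaced: {rehearsed:.2}");
}
```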
### Hebbian Learning
When memories are recalled together ≥ N times (default 3), they form an associative link. This creates an emergent semantic network purely from usage patterns.
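The rule can be sketched as a simple co-activation counter; the threshold default of 3 comes from the text above, while the type and method names are illustrative rather than engram's own:

```rust
use std::collections::HashMap;

// Illustrative co-activation tracker (not the engram API). A pair of
// memory ids forms a Hebbian link once recalled together `threshold` times.
struct HebbianTracker {
    counts: HashMap<(u64, u64), u32>,
    threshold: u32,
}

impl HebbianTracker {
    fn new(threshold: u32) -> Self {
        Self { counts: HashMap::new(), threshold }
    }

    /// Record a co-recall; returns true at the moment the link forms.
    fn co_recall(&mut self, a: u64, b: u64) -> bool {
        // Normalize the pair so (a, b) and (b, a) count together.
        let key = if a < b { (a, b) } else { (b, a) };
        let n = self.counts.entry(key).or_insert(0);
        *n += 1;
        *n == self.threshold
    }
}

fn main() {
    let mut tracker = HebbianTracker::new(3);
    tracker.co_recall(1, 2);
    tracker.co_recall(2, 1);
    let linked = tracker.co_recall(1, 2); // third co-recall forms the link
    println!("link formed: {linked}");
}
```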
### STDP (Causal Inference)
Tracks temporal ordering of co-activated memories. If memory A consistently precedes B, infers potential causation A → B and creates a causal memory.
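A sketch of the idea (the dominance heuristic, thresholds, and all names here are illustrative, not the engram implementation):

```rust
use std::collections::HashMap;

// Illustrative STDP-style ordering tracker (not the engram API): count how
// often memory A activates shortly before memory B; a consistently dominant
// ordering suggests a causal A -> B relationship.
struct StdpTracker {
    ordered_counts: HashMap<(u64, u64), u32>,
}

impl StdpTracker {
    fn new() -> Self {
        Self { ordered_counts: HashMap::new() }
    }

    /// Record that `earlier` was activated before `later` within the window.
    fn observe(&mut self, earlier: u64, later: u64) {
        *self.ordered_counts.entry((earlier, later)).or_insert(0) += 1;
    }

    /// Infer A -> B if that ordering clearly dominates the reverse
    /// (hypothetical heuristic: at least `min_count` observations and
    /// more than twice as many as the reverse ordering).
    fn infers_causation(&self, a: u64, b: u64, min_count: u32) -> bool {
        let forward = *self.ordered_counts.get(&(a, b)).unwrap_or(&0);
        let reverse = *self.ordered_counts.get(&(b, a)).unwrap_or(&0);
        forward >= min_count && forward > 2 * reverse
    }
}

fn main() {
    let mut stdp = StdpTracker::new();
    for _ in 0..4 {
        stdp.observe(1, 2); // memory 1 precedes memory 2 four times
    }
    stdp.observe(2, 1); // one reverse observation
    println!("1 -> 2 causal: {}", stdp.infers_causation(1, 2, 3));
}
```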
## Configuration Presets

Engram includes scientifically tuned presets for common agent archetypes:
```rust
use engram::{Memory, MemoryConfig};

// Constructor names below are reconstructed from the preset names;
// check the crate docs for exact signatures.

// Chatbot: slow decay, high replay
let config = MemoryConfig::chatbot();
let mem = Memory::new("agent.db", config)?;

// Task agent: fast decay, focus on recent context
let config = MemoryConfig::task_agent();

// Personal assistant: very slow core decay, remember for months
let config = MemoryConfig::personal_assistant();

// Researcher: minimal forgetting, everything might be relevant
let config = MemoryConfig::researcher();
```
### Preset Comparison

| Parameter | Default | Chatbot | Task Agent | Personal Assistant | Researcher |
|---|---|---|---|---|---|
| `mu1` (working decay) | 0.15 | 0.08 | 0.25 | 0.12 | 0.05 |
| `mu2` (core decay) | 0.005 | 0.003 | 0.01 | 0.001 | 0.001 |
| `alpha` (consolidation) | 0.08 | 0.12 | 0.05 | 0.10 | 0.15 |
| `interleave_ratio` | 0.3 | 0.4 | 0.1 | 0.3 | 0.5 |
| `forget_threshold` | 0.01 | 0.005 | 0.02 | 0.005 | 0.001 |
## API Reference

### Core Methods

- `Memory::new(path, config)` — Create or open database
- `mem.add(content, type, importance, source, metadata)` — Store a memory
- `mem.recall(query, limit, context, min_confidence)` — Retrieve with ACT-R
- `mem.consolidate(days)` — Run consolidation cycle ("sleep")
- `mem.forget(memory_id, threshold)` — Prune weak memories
- `mem.reward(feedback, recent_n)` — Apply dopaminergic feedback
- `mem.downscale(factor)` — Global synaptic downscaling
- `mem.stats()` — Memory system statistics
- `mem.pin(memory_id)` / `mem.unpin(memory_id)` — Pin/unpin memories
- `mem.hebbian_links(memory_id)` — Get associative neighbors
### Memory Types

- `Factual` — Facts and world knowledge
- `Episodic` — Events and experiences
- `Relational` — Knowledge about people/entities
- `Emotional` — Emotionally significant memories (high importance, slow decay)
- `Procedural` — How-to knowledge (slow decay)
- `Opinion` — Subjective beliefs (moderate decay)
- `Causal` — Cause-effect relationships (auto-created via STDP)
## Storage Schema

SQLite database with the following tables:

- `memories` — Core memory data with strengths, timestamps, metadata
- `access_log` — Every access timestamp (for ACT-R base-level activation)
- `hebbian_links` — Co-activation tracking and formed links
- `memories_fts` — FTS5 full-text search index
## IronClaw Integration

Engram can be used as a standalone crate or integrated with IronClaw (a Rust AI agent framework).
### Standalone Usage

```rust
use engram::Memory;

// Path and default config are illustrative.
let mut mem = Memory::new("memory.db", Default::default())?;
// Use directly in your agent
```
### As IronClaw Dependency

Add to your IronClaw tool/skill:

```toml
[dependencies]
engram = "0.1"
```
Then use in your agent's state:
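As a hedged sketch, assuming the `Memory` type from this crate: the agent struct, its method, and the error handling below are illustrative, not IronClaw's actual API.

```rust
use engram::{Memory, MemoryType};

// Illustrative agent state (not IronClaw's actual API): the agent owns a
// Memory handle and consults it on every turn.
struct MyAgent {
    memory: Memory,
}

impl MyAgent {
    fn handle_turn(&mut self, user_input: &str) -> Result<String, Box<dyn std::error::Error>> {
        // Recall relevant context before responding.
        let context = self.memory.recall(user_input, 5, None, 0.1)?;
        // ... generate a response using `context` ...
        // Store the exchange as an episodic memory.
        self.memory.add(user_input, MemoryType::Episodic, 0.5, "chat", None)?;
        Ok(format!("(response informed by {} memories)", context.len()))
    }
}
```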
## Comparison with Python Engram

| Feature | Python Engram | Rust Engram |
|---|---|---|
| ACT-R activation | ✅ | ✅ |
| Memory Chain Model | ✅ | ✅ |
| Ebbinghaus forgetting | ✅ | ✅ |
| Hebbian learning | ✅ | ✅ |
| STDP causal inference | ✅ | ✅ |
| Vector embeddings | ✅ (optional) | ⏳ (planned) |
| Performance | ~10ms recall | ~1ms recall |
| Memory footprint | ~50MB | ~5MB |
| Deployment | Requires Python | Single binary |
## Why Cognitive Models?
Traditional AI memory systems use naive cosine similarity on embeddings. This ignores decades of neuroscience research on how human memory actually works:
- Frequency matters — memories accessed often are more retrievable
- Recency matters — recent memories are more accessible (but decay over time)
- Spaced repetition — repeated access at increasing intervals strengthens memories
- Consolidation — memories transfer from short-term (hippocampus) to long-term (neocortex)
- Hebbian associations — co-activated memories become linked
- Importance modulation — emotional significance affects memory strength
Engram implements these principles mathematically, providing memory dynamics that mirror human cognition.
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.
## Citation
If you use Engram in research, please cite:
Original Python Engram:
## Acknowledgments
Cognitive models based on:
- Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? Oxford University Press. (ACT-R)
- Murre, J. M., & Chessa, A. G. (2011). Power laws from individual differences in learning and forgetting. (Memory Chain Model)
- Ebbinghaus, H. (1885). Memory: A Contribution to Experimental Psychology.
- Hebb, D. O. (1949). The Organization of Behavior.