# 🐚 Conch

Biological memory for AI agents. Semantic search + decay, no API keys needed.
## The Problem

Most AI agents use a flat `memory.md` file. It doesn't scale:
- Loads the whole file into context — bloats every prompt as memory grows
- No semantic recall — `grep` finds keywords, not meaning
- No decay — stale facts from months ago are weighted equally to today's
- No deduplication — the same thing gets stored 10 times in slightly different words
You end up with an ever-growing, expensive-to-query, unreliable mess.
## Why Conch

Conch replaces the flat file with a biologically-inspired memory engine:
- Recall by meaning — hybrid BM25 + vector search finds semantically relevant memories, not just keyword matches
- Decay over time — old memories fade unless reinforced; frequently-accessed ones survive longer
- Deduplicate on write — cosine similarity (0.95) detects near-duplicates and reinforces instead of cloning
- No infrastructure — SQLite file, local embeddings (FastEmbed, no API key), zero config
- Scales silently — 10,000 memories in your DB, 5 returned in context. Prompt stays small.
```
memory.md after 6 months:  4,000 lines, loaded every prompt
Conch after 6 months:      10,000 memories, 5 relevant ones returned per recall
```
## Install

Install from GitHub (recommended):
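The repository path below is inferred from the skill URL later in this README; assuming a standard Cargo setup, the install is something like:

```shell
# install the `conch` CLI directly from the Git repository
cargo install --git https://github.com/jlgrimes/conch conch
```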
Build from source:
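A typical source build (a sketch, assuming the same repository and no extra build steps):

```shell
# clone and build a release binary
git clone https://github.com/jlgrimes/conch
cd conch
cargo build --release
# binaries land in target/release/
```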
No Cargo? See the Installation Guide for step-by-step instructions.
## Quick Start

```sh
# Store a fact
conch remember "Jared" "works at" "Microsoft"

# Store an episode
conch remember-episode "Met Jared to review the Q3 roadmap"

# Recall by meaning (not keyword)
conch recall "where does Jared work"
# → [fact] Jared works at Microsoft (score: 0.847)

# Run decay maintenance
conch decay

# Database health
conch stats
```
## How It Works

**Store → Embed → Search → Decay → Reinforce**
- Store — facts (subject-relation-object) or episodes (free text). Embedding generated locally via FastEmbed.
- Search — hybrid BM25 + vector recall, fused via Reciprocal Rank Fusion (RRF), weighted by decayed strength.
- Decay — strength diminishes over time. Facts decay slowly (λ=0.02/day), episodes faster (λ=0.06/day).
- Reinforce — recalled memories get a boost. Frequently accessed ones survive longer.
- Death — memories below strength 0.01 are pruned during decay passes.
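The decay step reduces to a simple exponential using the λ values above. A minimal sketch (illustrative only, not Conch's actual implementation):

```rust
// Exponential decay using the per-day rates quoted above.
fn decayed_strength(initial: f64, lambda_per_day: f64, age_days: f64) -> f64 {
    initial * (-lambda_per_day * age_days).exp()
}

fn main() {
    let fact = decayed_strength(1.0, 0.02, 30.0);    // fact after 30 days
    let episode = decayed_strength(1.0, 0.06, 30.0); // episode after 30 days
    println!("fact: {fact:.3}, episode: {episode:.3}");
    // episodes fade roughly 3x faster; anything below 0.01 is pruned
    assert!(episode < fact);
}
```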
### Scoring

```
score = RRF(BM25_rank, vector_rank) × recency_boost × access_weight × effective_strength
```
- Recency boost — 7-day half-life, floor of 0.3
- Access weighting — log-normalized frequency boost (1.0–2.0×)
- Spreading activation — 1-hop graph traversal through shared subjects/objects
- Temporal co-occurrence — memories created in the same session get context boosts
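The fusion and recency terms above can be sketched as follows. The RRF constant `k = 60` is an assumption (a common default), not necessarily Conch's value; the recency curve follows the 7-day half-life and 0.3 floor stated above:

```rust
// Reciprocal Rank Fusion of the two retrieval ranks.
fn rrf(bm25_rank: f64, vector_rank: f64) -> f64 {
    let k = 60.0; // assumed constant
    1.0 / (k + bm25_rank) + 1.0 / (k + vector_rank)
}

// 7-day half-life recency boost with a floor of 0.3.
fn recency_boost(age_days: f64) -> f64 {
    0.5f64.powf(age_days / 7.0).max(0.3)
}

fn main() {
    // same retrieval ranks, different ages: the older memory scores lower
    let fresh = rrf(1.0, 2.0) * recency_boost(1.0);
    let stale = rrf(1.0, 2.0) * recency_boost(90.0);
    assert!(fresh > stale);
    println!("fresh: {fresh:.4}, stale: {stale:.4}");
}
```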
## Features
- Hybrid search — BM25 + vector semantic search via Reciprocal Rank Fusion
- Biological decay — configurable half-life curves per memory type
- Deduplication — cosine similarity threshold prevents duplicates; reinforces instead
- Graph traversal — spreading activation through shared subjects/objects
- Tags & source tracking — tag memories, track origin via source/session/channel
- MCP support — Model Context Protocol server for direct LLM tool integration
- Local embeddings — FastEmbed (AllMiniLM-L6-V2, 384-dim). No API keys, no network calls
- Single-file SQLite — zero infrastructure. One portable DB file
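The deduplication rule above reduces to a single cosine-similarity check against the 0.95 threshold. A minimal sketch (the helpers are illustrative, not Conch's code):

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

fn is_duplicate(new_embedding: &[f32], existing: &[f32]) -> bool {
    // at or above 0.95, reinforce the existing memory instead of inserting
    cosine(new_embedding, existing) >= 0.95
}

fn main() {
    let a = [1.0, 0.0, 1.0];
    let b = [0.9, 0.1, 1.0];
    println!("near-duplicate: {}", is_duplicate(&a, &b));
}
```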
## Comparison
| Feature | Conch | Mem0 | Zep | Raw Vector DB |
|---|---|---|---|---|
| Biological decay | ✅ | ❌ | ❌ | ❌ |
| Deduplication | Cosine 0.95 | Basic | Basic | Manual |
| Graph traversal | Spreading activation | ❌ | Graph edges | ❌ |
| Local embeddings | FastEmbed (no API) | API required | API required | Varies |
| Infrastructure | SQLite (zero-config) | Cloud/Redis | Postgres | Server required |
| MCP support | Built-in | ❌ | ❌ | ❌ |
## Commands

```sh
conch remember <subject> <relation> <object>   # store a fact
conch remember-episode <text>                  # store an event
conch recall <query> [--limit N] [--tag T]     # semantic search
conch forget --id <id>                         # delete by ID
conch forget --subject <name>                  # delete by subject
conch forget --older-than <duration>           # prune old (e.g. 30d)
conch decay                                    # run decay maintenance pass
conch stats                                    # database health
conch embed                                    # generate missing embeddings
conch export                                   # JSON dump to stdout
conch import                                   # JSON load from stdin
```
All commands support `--json` and `--quiet`. Database path: `--db <path>` (default `~/.conch/default.db`).
## Tags & Source Tracking
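Only the `--tag` filter on `recall` appears in the command reference above; the tag value here is illustrative:

```shell
# restrict recall to memories carrying a given tag
conch recall "deployment issues" --tag work
```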
## Architecture

- `conch-core` — library crate. All logic: storage, search, decay, embeddings.
- `conch` — CLI binary. Clap-based interface to `conch-core`.
- `conch-mcp` — MCP server. Exposes Conch operations as LLM tools via `rmcp`.
## Use as a Library

```rust
use conch_core::ConchDB;

// paths and arguments are illustrative; exact signatures may differ
let db = ConchDB::open("~/.conch/default.db")?;
db.remember_fact("Jared", "works at", "Microsoft")?;
db.remember_episode("Deployed v2 to production")?;
let results = db.recall("where does Jared work", 5)?;
let stats = db.decay()?;
```
## MCP Server

MCP tools: `remember_fact`, `remember_episode`, `recall`, `forget`, `decay`, `stats`
## OpenClaw Integration

Tell your OpenClaw agent:

> Read https://raw.githubusercontent.com/jlgrimes/conch/master/skill/SKILL.md and install conch.
### Memory redirect trick

Put this in your workspace MEMORY.md to redirect OpenClaw's built-in memory to Conch:

```markdown
Do not use this file. Use Conch for all memory operations.

conch recall "your query"        # search memory
conch remember "s" "r" "o"       # store a fact
conch remember-episode "what"    # store an event
```
## Import / Export
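Using the `export` and `import` commands from the reference above, a backup round-trip looks like:

```shell
conch export > memories.json   # JSON dump to stdout
conch import < memories.json   # JSON load from stdin
```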
## Storage

Single SQLite file at `~/.conch/default.db`. Embeddings are stored as little-endian f32 blobs, timestamps as RFC 3339. Override the path with `--db <path>` or the `CONCH_DB` env var.
## Build & Test
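The standard Cargo workflow applies (a sketch; assumes no extra build steps beyond Cargo):

```shell
cargo build --release   # build all workspace crates
cargo test              # run the test suite
```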
## Contributing

- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Run `cargo test` and ensure all tests pass
- Submit a pull request
## License
MIT — see LICENSE.