# 🐚 Conch
**Biological memory for AI agents.** Semantic search + decay, no API keys needed.
[License: MIT](LICENSE) · [CI](https://github.com/jlgrimes/conch/actions)
---
## The Problem
Most AI agents use a flat `memory.md` file. It doesn't scale:
- **Loads the whole file into context** — bloats every prompt as memory grows
- **No semantic recall** — `grep` finds keywords, not meaning
- **No decay** — stale facts from months ago carry the same weight as today's
- **No deduplication** — the same thing gets stored 10 times in slightly different words
You end up with an ever-growing, expensive-to-query, unreliable mess.
## Why Conch
Conch replaces the flat file with a **biologically-inspired memory engine**:
- **Recall by meaning** — hybrid BM25 + vector search finds semantically relevant memories, not just keyword matches
- **Decay over time** — old memories fade unless reinforced; frequently-accessed ones survive longer
- **Deduplicate on write** — cosine similarity ≥ 0.95 flags a near-duplicate; the existing memory is reinforced instead of cloned
- **No infrastructure** — SQLite file, local embeddings (FastEmbed, no API key), zero config
- **Scales silently** — 10,000 memories in your DB, 5 returned in context. Prompt stays small.
```
memory.md after 6 months: 4,000 lines, loaded every prompt
Conch after 6 months: 10,000 memories, 5 relevant ones returned per recall
```
## Install
**Install from GitHub (recommended):**
```bash
cargo install --git https://github.com/jlgrimes/conch conch
```
**Build from source:**
```bash
git clone https://github.com/jlgrimes/conch
cd conch
cargo install --path crates/conch-cli
```
**No Cargo?** See the [Installation Guide](docs/install.md) for step-by-step instructions.
## Quick Start
```bash
# Store a fact
conch remember "Jared" "works at" "Microsoft"
# Store an episode
conch remember-episode "Deployed v2.0 to production"
# Recall by meaning (not keyword)
conch recall "where does Jared work?"
# → [fact] Jared works at Microsoft (score: 0.847)
# Run decay maintenance
conch decay
# Database health
conch stats
```
## How It Works
```
Store → Embed → Search → Decay → Reinforce
```
1. **Store** — facts (subject-relation-object) or episodes (free text). Embedding generated locally via FastEmbed.
2. **Search** — hybrid BM25 + vector recall, fused via Reciprocal Rank Fusion (RRF), weighted by decayed strength.
3. **Decay** — strength diminishes over time. Facts decay slowly (λ=0.02/day), episodes faster (λ=0.06/day); a sketch of the curve follows this list.
4. **Reinforce** — recalled memories get a boost. Frequently accessed ones survive longer.
5. **Death** — memories below strength 0.01 are pruned during decay passes.
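
The decay curve above is plain exponential decay, s(t) = s₀·e^(−λt). A minimal sketch of the model as described (illustrative only, not the conch-core source):

```rust
/// Exponential decay as described above: facts use λ = 0.02/day,
/// episodes λ = 0.06/day. Illustrative sketch, not the conch-core code.
fn decayed_strength(initial: f64, lambda_per_day: f64, age_days: f64) -> f64 {
    initial * (-lambda_per_day * age_days).exp()
}

/// Memories below 0.01 are pruned ("death") during a decay pass.
fn should_prune(strength: f64) -> bool {
    strength < 0.01
}

fn main() {
    // An unreinforced episode (λ = 0.06/day) crosses the death threshold
    // after ln(100) / 0.06 ≈ 77 days; a fact (λ = 0.02/day) after ≈ 230.
    let episode = decayed_strength(1.0, 0.06, 77.0);
    println!("episode after 77 days: {episode:.4}, pruned: {}", should_prune(episode));
}
```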
### Scoring
```
score = RRF(BM25_rank, vector_rank) × recency_boost × access_weight × effective_strength
```
- **Recency boost** — 7-day half-life, floor of 0.3
- **Access weighting** — log-normalized frequency boost (1.0–2.0×)
- **Spreading activation** — 1-hop graph traversal through shared subjects/objects
- **Temporal co-occurrence** — memories created in the same session get context boosts
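
Concretely, a minimal sketch of this pipeline, assuming the conventional RRF constant k = 60 (Conch's actual constant is not documented here) and the recency/access parameters listed above:

```rust
/// Reciprocal Rank Fusion over 1-based BM25 and vector ranks. k = 60 is
/// the conventional RRF constant; treat it as an assumption, not Conch's
/// documented value.
fn rrf(bm25_rank: usize, vector_rank: usize, k: f64) -> f64 {
    1.0 / (k + bm25_rank as f64) + 1.0 / (k + vector_rank as f64)
}

/// Recency boost with a 7-day half-life, floored at 0.3, per the list above.
fn recency_boost(age_days: f64) -> f64 {
    (0.5_f64).powf(age_days / 7.0).max(0.3)
}

/// Final score: fused rank weighted by the multiplicative factors from the
/// formula above. `access_weight` is the log-normalized 1.0–2.0× factor;
/// `effective_strength` is the post-decay strength.
fn score(bm25_rank: usize, vector_rank: usize, age_days: f64,
         access_weight: f64, effective_strength: f64) -> f64 {
    rrf(bm25_rank, vector_rank, 60.0)
        * recency_boost(age_days)
        * access_weight
        * effective_strength
}
```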
## Features
- **Hybrid search** — BM25 + vector semantic search via Reciprocal Rank Fusion
- **Biological decay** — configurable half-life curves per memory type
- **Deduplication** — cosine similarity threshold prevents duplicates; reinforces instead (sketched after this list)
- **Graph traversal** — spreading activation through shared subjects/objects
- **Tags & source tracking** — tag memories, track origin via source/session/channel
- **MCP support** — Model Context Protocol server for direct LLM tool integration
- **Local embeddings** — FastEmbed (AllMiniLM-L6-V2, 384-dim). No API keys, no network calls
- **Single-file SQLite** — zero infrastructure. One portable DB file
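
The dedup check noted above reduces to a cosine test against existing embeddings. A minimal illustrative sketch, assuming raw f32 vectors:

```rust
/// Cosine similarity over raw f32 embeddings (384-dim for AllMiniLM-L6-V2).
/// Sketch of the dedup-on-write idea: a new memory scoring >= 0.95 against
/// an existing one reinforces that row instead of inserting a duplicate.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

fn is_near_duplicate(new: &[f32], existing: &[f32]) -> bool {
    cosine(new, existing) >= 0.95
}
```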
## Comparison
| Feature | Conch | Hosted memory APIs | Postgres-backed stores | Self-hosted vector DBs |
|---|---|---|---|---|
| Biological decay | ✅ | ❌ | ❌ | ❌ |
| Deduplication | Cosine 0.95 | Basic | Basic | Manual |
| Graph traversal | Spreading activation | ❌ | Graph edges | ❌ |
| Local embeddings | FastEmbed (no API) | API required | API required | Varies |
| Infrastructure | SQLite (zero-config) | Cloud/Redis | Postgres | Server required |
| MCP support | Built-in | ❌ | ❌ | ❌ |
## Commands
```
conch remember <subject> <relation> <object> # store a fact
conch remember-episode <text> # store an event
conch recall <query> [--limit N] [--tag T] # semantic search
conch forget --id <id> # delete by ID
conch forget --subject <name> # delete by subject
conch forget --older-than <duration> # prune old (e.g. 30d)
conch decay # run decay maintenance pass
conch stats # database health
conch embed # generate missing embeddings
conch export # JSON dump to stdout
conch import # JSON load from stdin
```
All commands support `--json` and `--quiet`. Database path: `--db <path>` (default `~/.conch/default.db`).
### Tags & Source Tracking
```bash
conch remember "API" "uses" "REST" --tags "architecture,backend"
conch remember-episode "Fixed auth bug" --source "slack" --session-id "abc123"
conch recall "architecture decisions" --tag "architecture"
```
## Architecture
```
conch-core   Library crate. All logic: storage, search, decay, embeddings.
conch-cli    CLI crate; builds the conch binary. Clap-based interface to conch-core.
conch-mcp    MCP server. Exposes conch operations as LLM tools via rmcp.
```
### Use as a Library
```rust
use conch_core::ConchDB;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = ConchDB::open("my_agent.db")?;

    // Store a fact and an episode
    db.remember_fact("Jared", "works at", "Microsoft")?;
    db.remember_episode("Deployed v2.0 to production")?;

    // Top-5 semantic recall
    let results = db.recall("where does Jared work?", 5)?;

    // Run a decay maintenance pass
    let stats = db.decay()?;
    Ok(())
}
```
### MCP Server
```json
{
  "mcpServers": {
    "conch": {
      "command": "conch-mcp",
      "env": { "CONCH_DB": "~/.conch/default.db" }
    }
  }
}
```
**MCP tools**: `remember_fact`, `remember_episode`, `recall`, `forget`, `decay`, `stats`
## OpenClaw Integration
Tell your OpenClaw agent:
> Read https://raw.githubusercontent.com/jlgrimes/conch/master/skill/SKILL.md and install conch.
### Memory redirect trick
Put this in your workspace `MEMORY.md` to redirect OpenClaw's built-in memory to Conch:
```markdown
# Memory
Do not use this file. Use Conch for all memory operations.
conch recall "your query" # search memory
conch remember "s" "r" "o" # store a fact
conch remember-episode "what" # store an event
```
## Import / Export
```bash
conch export > backup.json
conch import < backup.json
```
## Storage
Single SQLite file at `~/.conch/default.db`. Embeddings stored as little-endian f32 blobs. Timestamps as RFC 3339. Override with `--db <path>` or `CONCH_DB` env var.
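
For external tooling that reads the database directly, decoding an embedding blob is a few lines. A sketch, assuming the raw BLOB bytes are already fetched (e.g. via rusqlite):

```rust
/// Decode an embedding blob as described above: consecutive little-endian
/// f32 values (384 of them for AllMiniLM-L6-V2). Illustrative sketch for
/// external tooling, not part of the conch-core API.
fn decode_embedding(blob: &[u8]) -> Vec<f32> {
    blob.chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}
```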
## Build & Test
```bash
cargo build
cargo test
cargo install --path crates/conch-cli
```
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feat/my-feature`)
3. Run `cargo test` and ensure all tests pass
4. Submit a pull request
## License
MIT — see [LICENSE](LICENSE).