# Sediment
Semantic memory for AI agents. Local-first, MCP-native.
Combines vector search, a relationship graph, and access tracking into a unified memory intelligence layer — all running locally as a single binary.
## Why Sediment?
- Single binary, zero config — no Docker, no Postgres, no Qdrant. Just `sediment`.
- Sub-25ms recall — local embeddings and vector search, no network round-trips.
- 5-tool focused API — `store`, `recall`, `list`, `forget`, `connections`. That's it.
- Works everywhere — macOS (Intel + ARM), Linux x86_64. All data stays on your machine.
## Comparison
| | Sediment | OpenMemory MCP | mcp-memory-service |
|---|---|---|---|
| Install | Single binary | Docker + Postgres + Qdrant | Python + pip |
| Dependencies | None | 3 services | Python runtime + deps |
| Tools | 5 | 10+ | 24 |
| Embeddings | Local (all-MiniLM-L6-v2) | API-dependent | API-dependent |
| Graph features | Built-in | No | No |
| Memory decay | Built-in | No | No |
## Install

Install from crates.io, via Homebrew, with the shell installer, or from source.
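A sketch of the options; the crate and formula names are assumptions, and the shell-installer URL is not shown in this README, so it is omitted rather than guessed:

```sh
# Via crates.io (assumes the crate is published as `sediment`)
cargo install sediment

# Via Homebrew (assumes a `sediment` formula or tap exists)
brew install sediment

# From source (replace <repository-url> with the project's repo)
git clone <repository-url>
cd sediment
cargo install --path .
```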
## Setup
Add Sediment to your MCP client configuration:
### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
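A minimal configuration sketch, assuming the `sediment` binary is on your `PATH` and speaks MCP over stdio when invoked with no arguments (Claude Code, Cursor, and Windsurf typically accept this same `mcpServers` shape):

```json
{
  "mcpServers": {
    "sediment": {
      "command": "sediment"
    }
  }
}
```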
### Claude Code

Run `sediment init` in your project, or add manually to `~/.claude/settings.json`:
### Cursor

Add to `.cursor/mcp.json` in your project:
### VS Code (Copilot)

Add to `.vscode/mcp.json` in your project:
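VS Code's `mcp.json` uses a `servers` key rather than `mcpServers`; a minimal sketch, again assuming the binary runs as a stdio MCP server with no arguments:

```json
{
  "servers": {
    "sediment": {
      "type": "stdio",
      "command": "sediment"
    }
  }
}
```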
### Windsurf

Add to `~/.codeium/windsurf/mcp_config.json`:
### JetBrains IDEs

Go to **Settings > Tools > AI Assistant > MCP Servers**, click **+**, and add:
## Tools

| Tool | Description |
|---|---|
| `store` | Save content with optional title, tags, metadata, expiration, scope, replace, and related item links |
| `recall` | Search memories by semantic similarity with decay scoring, trust weighting, graph expansion, and co-access suggestions |
| `list` | List stored items by scope (project/global/all) with tag filtering |
| `forget` | Delete an item by ID (removes from vector store and graph) |
| `connections` | Show relationship graph for an item (related, supersedes, co-accessed edges) |
## CLI
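The binary doubles as a CLI. The one subcommand referenced elsewhere in this README is `sediment init`, shown here as an example; other subcommands are not documented in this section:

```sh
# Register Sediment in the current project's MCP configuration
sediment init
```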
## How It Works

### Three-Database Hybrid
All local, embedded, zero config:
- LanceDB — Vector embeddings and semantic similarity search
- SQLite (graph) — Relationship tracking: RELATED, SUPERSEDES, CO_ACCESSED, CLUSTER_SIBLING edges
- SQLite (access) — Mutable counters: access tracking, decay scoring, consolidation queue
### Key Features
- Memory decay: Results re-ranked by freshness (30-day half-life) and access frequency. Old memories rank lower but are never auto-deleted.
- Trust-weighted scoring: Validated and well-connected memories score higher.
- Project scoping: Automatic context isolation between projects. Same-project items get a similarity boost.
- Relationship graph: Items linked via RELATED, SUPERSEDES, and CO_ACCESSED edges. Recall expands results with 1-hop graph neighbors and co-access suggestions.
- Background consolidation: Near-duplicates (≥0.95 similarity) auto-merged; similar items (0.85–0.95) linked.
- Auto-tagging: Items without tags inherit tags from similar existing items.
- Type-aware chunking: Intelligent splitting for markdown, code, JSON, YAML, and plain text.
- Conflict detection: Items with ≥0.85 similarity flagged on store.
- Cross-project recall: Results from other projects flagged with provenance metadata.
- Local embeddings: all-MiniLM-L6-v2 via Candle (384-dim vectors, no API keys).
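The decay and trust weighting above can be sketched in code. This is an illustrative model only, not Sediment's actual formula: the 30-day half-life comes from the README, but the frequency boost factor is an assumption.

```rust
/// Illustrative re-ranking sketch (not Sediment's internal formula):
/// similarity is scaled by a 30-day freshness half-life and nudged
/// upward by access frequency (the 0.1 boost factor is an assumption).
fn decay_score(similarity: f64, age_days: f64, access_count: u32) -> f64 {
    let freshness = 0.5_f64.powf(age_days / 30.0); // 30-day half-life
    let frequency = 1.0 + 0.1 * (access_count as f64).ln_1p(); // hypothetical boost
    similarity * freshness * frequency
}

fn main() {
    // A slightly less similar but fresh, frequently used memory can
    // outrank a stale match -- old items sink, but are never deleted.
    let fresh = decay_score(0.80, 1.0, 10);
    let stale = decay_score(0.85, 90.0, 0);
    assert!(fresh > stale);
    println!("fresh = {fresh:.3}, stale = {stale:.3}");
}
```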
## Performance

Sub-25ms recall latency at 10K items with full graph features enabled. See `BENCHMARKS.md` for detailed numbers.
| Items stored | Recall (graph off) | Recall (graph on) |
|---|---|---|
| 100 | ~8ms | ~10ms |
| 1,000 | ~12ms | ~15ms |
| 10,000 | ~18ms | ~23ms |
## Data Location

- Vector store: `~/.sediment/data/`
- Graph + access tracking: `~/.sediment/access.db`
Everything runs locally. Your data never leaves your machine.
## Contributing

See `CONTRIBUTING.md` for build instructions and PR guidelines.