# Cortex

Local-first, auditable AI memory substrate.
Cortex records what your AI agent does into an append-only, hash-chained event ledger. It derives structured memories from those events, scores them by salience and proof state, and builds context packs that tell the model exactly what it's looking at — including what was excluded, redacted, or flagged as unverified. Every memory carries its truth ceiling: `LocalUnsigned`, `SelfSigned`, `RemoteSigned`. Nothing gets silently promoted.

The problem it solves: LLM sessions are ephemeral. Anything the model learns is gone when the context window closes. Existing solutions (vector databases, fine-tuning) either lose the audit trail or require cloud infrastructure. Cortex is the middle path: durable, inspectable, local-first memory with a policy lattice that prevents an AI from laundering a guess into trusted history.

**Trust boundary:** Rust owns validation, storage, scoring, and audit. Models propose interpretations and are always bounded by a ceiling.
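The ceiling lattice above can be sketched as an ordered enum. This is an illustrative shape only — the type name, variants-as-ordering, and `effective` helper are assumptions for this README, not the crate's actual API:

```rust
// Hypothetical sketch of the truth-ceiling lattice described above;
// the real type lives in the crtx crates and may differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum TruthCeiling {
    LocalUnsigned, // lowest: unverified local claim
    SelfSigned,
    RemoteSigned,  // highest: verified by a remote party
}

// A proposed trust level is clamped to the ceiling:
// nothing is ever promoted past it.
fn effective(proposed: TruthCeiling, ceiling: TruthCeiling) -> TruthCeiling {
    proposed.min(ceiling)
}
```

Deriving `Ord` from variant order is what makes "nothing gets silently promoted" a one-line invariant.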
## Install
Requires Rust 1.88+ (`rust-toolchain.toml` pins it; rustup handles upgrades).

Platform notes: the binary is `cortex` on Linux/macOS, `cortex.exe` on Windows.

See `docs/INSTALL.md` for upgrade, uninstall, and data-directory details.
## Use with Claude Code (or any MCP client)
Add to `.claude/settings.json` in your project:

**Zero-gate mode** — the AI manages the full memory lifecycle, no confirmation token required:

**Manual mode** — paste the confirmation token once per session (printed to stderr at startup):
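Both modes register the same stdio server. As a rough shape only — the `command` and `args` values below are assumptions; check `docs/MCP_GUIDE.md` for the exact invocation:

```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["mcp"]
    }
  }
}
```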
24 tools are available immediately across three tiers (session, supervised, confirmed).
See docs/MCP_GUIDE.md for the full tool reference and parameter schemas.
## Autonomous session pattern
```text
session start:  cortex_search("what you're working on")    ← pull relevant memories
                cortex_context()                           ← broader context pack if needed
… do your work …
session end:    cortex_session_close({events_json: …})     ← index this session
                cortex_session_commit()                    ← auto in zero-gate mode
```
Memories are searchable from the next session after commit — the same-session gate (ADR 0047) prevents an AI from using its own in-flight work as trusted prior art.
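The gate reduces to a single comparison. A minimal sketch, assuming sessions carry a monotonically increasing id (the real check lives in the crtx crates):

```rust
// Illustrative same-session gate (ADR 0047): a memory only becomes
// searchable in sessions AFTER the one that committed it, so an AI
// cannot cite its own in-flight work as trusted prior art.
fn is_retrievable(memory_committed_in: u64, current_session: u64) -> bool {
    memory_committed_in < current_session
}
```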
## Setting up Cortex via an LLM
Paste `docs/LLM_SETUP.md` into any capable LLM and it will install, initialize, and configure Cortex for your project autonomously.
## Documentation
| Doc | What it covers |
|---|---|
| `docs/INSTALL.md` | First-time install, upgrade, uninstall |
| `docs/QUICKSTART.md` | 5-minute path to first memory |
| `docs/LLM_SETUP.md` | Copy-paste to any LLM to set up Cortex autonomously |
| `docs/AUTOMATION.md` | Zero-gate config — three ways to enable it |
| `docs/MCP_GUIDE.md` | All 24 MCP tools, parameter schemas, workflows |
| `docs/USER_GUIDE.md` | Full operator guide |
| `docs/WIRING_GUIDE.md` | End-to-end operator setup |
| `docs/RUNBOOK.md` | Operational procedures and drills |
| `docs/help/QUICK_REFERENCE.md` | One-liner cheat sheet |
| `docs/THREAT_MODEL.md` | Trust boundaries and residual risk |
| `CHANGELOG.md` | Release notes and behavioral-change log |
| API docs (docs.rs) | Rust crate API |
Architecture decisions: `docs/adr/README.md` · Design spec: `docs/BUILD_SPEC.md`
## 60-second demo
Uses the installed binary. Run from the cortex repo root (for the fixture file), or point `SESSION` at any session JSON.
```sh
# Ingest a one-event session (idempotent — safe to re-run).
SESSION="crates/cortex-cli/tests/fixtures/session-minimal.json"

# Verify the hash chain end-to-end.

# Deterministic reflection via the replay adapter (no LLM, no network).
```
Expected output:

```text
cortex init: db = /tmp/demo.db (created)
appended evt_01ARZ3NDEKTSV4RRFFQ69G5FAV
appended_count = 1
audit verify: /tmp/demo.jsonl (1 rows scanned, 0 failures)
{"episode_candidates":[{"summary":"Demo trace: user greeted the cortex CLI.",...}]}
```
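The chain check that `audit verify` exercises can be sketched as follows. This is an illustration only — `std`'s `DefaultHasher` stands in for the cryptographic hash the real ledger uses, and all names here are hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative event: each entry records its predecessor's hash.
struct Event {
    payload: String,
    prev_hash: u64,
    hash: u64,
}

fn chain_hash(prev: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

// Append-only: the new event's hash covers the previous event's hash.
fn append(ledger: &mut Vec<Event>, payload: &str) {
    let prev = ledger.last().map_or(0, |e| e.hash);
    ledger.push(Event {
        payload: payload.to_string(),
        prev_hash: prev,
        hash: chain_hash(prev, payload),
    });
}

// Recompute every link; editing any earlier event breaks all later hashes.
fn verify(ledger: &[Event]) -> bool {
    let mut prev = 0;
    ledger.iter().all(|e| {
        let ok = e.prev_hash == prev && e.hash == chain_hash(prev, &e.payload);
        prev = e.hash;
        ok
    })
}
```

The point of the chain: tampering with history is detectable without trusting the writer.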
## Ollama (optional — local LLM for reflection and semantic search)
Configure in `~/.config/cortex/config.toml` (Linux/macOS) or `%APPDATA%\cortex\config.toml` (Windows):
```toml
[]
= "ollama"

[]
= "http://localhost:11434"
# Get the digest: ollama show --modelfile llama3.1:8b | grep "^FROM"
= "llama3.1:8b@sha256:<64-char-hex-digest>"

[]
= "ollama"

[]
= "nomic-embed-text"
```
The model ref must be digest-pinned — `latest` and plain tags are refused. See `docs/WIRING_GUIDE.md` §16b for the full config reference.
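The digest-pinning rule amounts to a small check. A hedged sketch — the function name is hypothetical and the real parser in the crtx crates may be stricter:

```rust
// Accept only model refs of the form "<name>@sha256:<64 hex chars>";
// bare tags like "latest" or "llama3.1:8b" are refused.
fn is_digest_pinned(model_ref: &str) -> bool {
    match model_ref.split_once("@sha256:") {
        Some((name, digest)) => {
            !name.is_empty()
                && digest.len() == 64
                && digest.chars().all(|c| c.is_ascii_hexdigit())
        }
        None => false,
    }
}
```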
## Workspace
13 crates published to crates.io under the `crtx-*` namespace:
| Crate | Role |
|---|---|
| `crtx-core` | IDs, errors, schema version constants |
| `crtx-ledger` | Append-only event log and hash chain |
| `crtx-store` | SQLite persistence: migrations, repositories |
| `crtx-memory` | Memory lifecycle, salience, decay, contradictions |
| `crtx-reflect` | Reflection orchestration (no doctrine promotion) |
| `crtx-llm` | Ollama, Claude, OpenAI-compat adapters behind one trait |
| `crtx-retrieval` | Hybrid retrieval: lexical + FTS5 + semantic |
| `crtx-context` | Context pack assembly with token budgeting |
| `crtx-runtime` | Child agent execution glue |
| `crtx-session` | Session-close pipeline |
| `crtx-mcp` | MCP stdio JSON-RPC 2.0 server + 24 tool handlers |
| `crtx-verifier` | Pure trust-evidence reducer (ADR 0041) |
| `crtx` | `cortex` CLI binary |
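The "one trait" behind the `crtx-llm` adapters might look roughly like this. The trait name and signature are assumptions for illustration — the real API is in the crate docs on docs.rs:

```rust
// Hypothetical adapter trait; Ollama, Claude, and OpenAI-compat
// backends would each implement it.
trait LlmAdapter {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// A replay adapter (as in the demo) returns a canned response,
// keeping reflection deterministic: no LLM, no network.
struct ReplayAdapter {
    canned: String,
}

impl LlmAdapter for ReplayAdapter {
    fn complete(&self, _prompt: &str) -> Result<String, String> {
        Ok(self.canned.clone())
    }
}
```

Swapping backends behind one trait is what lets the demo run offline while production uses a live model.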
## Development
Requires `just` (`cargo install just`). Without it:

```sh
cargo fmt --all -- --check \
  && cargo clippy --workspace --all-targets -- -D warnings \
  && cargo test --workspace \
  && cargo deny check
```
CI runs Ubuntu + macOS + Windows in parallel via Azure DevOps Pipelines. Every push runs:

- `cargo check` / `clippy` / `fmt` / `test` / `deny`
- `audit authority` audit (gates on High+ findings)
- Four drills: anchor, v1-to-v2, restore, axiom admission
## Contributing
See `CONTRIBUTING.md` for dev setup, architecture overview, and how to add CLI commands, MCP tools, and migrations.
## Security
See `SECURITY.md` to report vulnerabilities.

Trust model and boundaries: `docs/THREAT_MODEL.md`.
## License
Licensed under either of Apache License, Version 2.0 or MIT license at your option.