# Cortex

[![CI](https://img.shields.io/azure-devops/build/0ryant/cortex/18/main)](https://dev.azure.com/0ryant/cortex/_build/latest?definitionId=18&branchName=main)
[![Crates.io](https://img.shields.io/crates/v/crtx)](https://crates.io/crates/crtx)
[![MSRV](https://img.shields.io/badge/rustc-1.88+-blue)](rust-toolchain.toml)
[![License](https://img.shields.io/badge/license-MIT%20OR%20Apache--2.0-blue)](LICENSE)

**Local-first, auditable AI memory substrate.**

Cortex records what your AI agent does into an append-only, hash-chained event ledger. It derives structured memories from those events, scores them by salience and proof state, and builds context packs that tell the model exactly what it's looking at — including what was excluded, redacted, or flagged as unverified. Every memory carries its truth ceiling: `LocalUnsigned`, `SelfSigned`, `RemoteSigned`. Nothing gets silently promoted.

**The problem it solves:** LLM sessions are ephemeral. Anything the model learns is gone when the context window closes. Existing solutions (vector databases, fine-tuning) either lose the audit trail or require cloud infrastructure. Cortex is the middle path: durable, inspectable, local-first memory with a policy lattice that prevents an AI from laundering a guess into trusted history.

**Trust boundary:** Rust owns validation, storage, scoring, and audit. Models propose interpretations and are always bounded by a ceiling.
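
As a rough illustration only (the field names here are hypothetical, not the actual schema), a hash-chained ledger event carrying a truth ceiling might look like:

```json
{
  "id": "evt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
  "prev_hash": "sha256:…",
  "payload": { "kind": "tool_call", "summary": "ran cargo test" },
  "ceiling": "LocalUnsigned"
}
```

Each event hashes over its predecessor, so tampering with any entry breaks verification for everything after it.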

---

## Install

```bash
cargo install crtx          # installs the `cortex` binary
cortex --version
cortex init                 # creates ~/.local/share/cortex/cortex.db + events.jsonl
cortex doctor --strict      # verify the store is healthy
```

Requires **Rust 1.88+** (`rust-toolchain.toml` pins it; `rustup` handles upgrades).  
Platform notes: binary is `cortex` on Linux/macOS, `cortex.exe` on Windows.  
See [`docs/INSTALL.md`](docs/INSTALL.md) for upgrade, uninstall, and data-directory details.

---

## Use with Claude Code (or any MCP client)

Add to `.claude/settings.json` in your project:

**Zero-gate mode** — AI manages the full memory lifecycle, no confirmation token required:

```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["serve"],
      "env": { "CORTEX_MCP_AUTO_COMMIT": "1" }
    }
  }
}
```

**Manual mode** — paste the confirmation token once per session (printed to stderr at startup):

```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["serve"]
    }
  }
}
```

**24 tools** are available immediately across three tiers (session, supervised, confirmed).  
See [`docs/MCP_GUIDE.md`](docs/MCP_GUIDE.md) for the full tool reference and parameter schemas.

### Autonomous session pattern

```
session start:  cortex_search("what you're working on")   ← pull relevant memories
                cortex_context()                          ← broader context pack if needed
                … do your work …
session end:    cortex_session_close({events_json: …})    ← index this session
                cortex_session_commit()                   ← auto in zero-gate mode
```

Memories are searchable from the **next** session after commit — the same-session gate (ADR 0047) prevents an AI from using its own in-flight work as trusted prior art.
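
Under the hood, each of these tool calls is a standard MCP `tools/call` request over stdio. A minimal sketch (the argument names are illustrative; the authoritative parameter schemas live in [`docs/MCP_GUIDE.md`](docs/MCP_GUIDE.md)):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "cortex_search",
    "arguments": { "query": "what you're working on" }
  }
}
```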

### Setting up Cortex via an LLM

Paste [`docs/LLM_SETUP.md`](docs/LLM_SETUP.md) into any capable LLM and it will install, initialize, and configure Cortex for your project autonomously.

---

## Documentation

| Doc | What it covers |
|-----|---------------|
| [`docs/INSTALL.md`](docs/INSTALL.md) | First-time install, upgrade, uninstall |
| [`docs/QUICKSTART.md`](docs/QUICKSTART.md) | 5-minute path to first memory |
| [`docs/LLM_SETUP.md`](docs/LLM_SETUP.md) | Copy-paste to any LLM to set up Cortex autonomously |
| [`docs/AUTOMATION.md`](docs/AUTOMATION.md) | Zero-gate config — three ways to enable it |
| [`docs/MCP_GUIDE.md`](docs/MCP_GUIDE.md) | All 24 MCP tools, parameter schemas, workflows |
| [`docs/USER_GUIDE.md`](docs/USER_GUIDE.md) | Full operator guide |
| [`docs/WIRING_GUIDE.md`](docs/WIRING_GUIDE.md) | End-to-end operator setup |
| [`docs/RUNBOOK.md`](docs/RUNBOOK.md) | Operational procedures and drills |
| [`docs/help/QUICK_REFERENCE.md`](docs/help/QUICK_REFERENCE.md) | One-liner cheat sheet |
| [`docs/THREAT_MODEL.md`](docs/THREAT_MODEL.md) | Trust boundaries and residual risk |
| [`CHANGELOG.md`](CHANGELOG.md) | Release notes and behavioral-change log |
| [API docs (docs.rs)](https://docs.rs/crtx) | Rust crate API |

Architecture decisions: [`docs/adr/README.md`](docs/adr/README.md) · Design spec: [`docs/BUILD_SPEC.md`](docs/BUILD_SPEC.md)

---

## 60-second demo

Uses the installed binary. Run from the cortex repo root (for the fixture file) or point `SESSION` at any session JSON.

```bash
cortex init --db /tmp/demo.db --event-log /tmp/demo.jsonl

# Ingest a one-event session (idempotent — safe to re-run).
SESSION="crates/cortex-cli/tests/fixtures/session-minimal.json"
cortex ingest "$SESSION" --db /tmp/demo.db --event-log /tmp/demo.jsonl

# Verify the hash chain end-to-end.
cortex audit verify --db /tmp/demo.db --event-log /tmp/demo.jsonl

# Deterministic reflection via the replay adapter (no LLM, no network).
cortex reflect --trace trc_01ARZ3NDEKTSV4RRFFQ69G5FAW --model replay \
  --db /tmp/demo.db --event-log /tmp/demo.jsonl
```

Expected output:
```
cortex init: db = /tmp/demo.db (created)
appended evt_01ARZ3NDEKTSV4RRFFQ69G5FAV
appended_count = 1
audit verify: /tmp/demo.jsonl (1 rows scanned, 0 failures)
{"episode_candidates":[{"summary":"Demo trace: user greeted the cortex CLI.",...}]}
```

---

## Ollama (optional — local LLM for reflection and semantic search)

```bash
ollama serve                  # must be running before any Ollama-backed command
ollama pull llama3.1:8b       # for reflection / cortex run
ollama pull nomic-embed-text  # for semantic embeddings

cortex models list             # verify Ollama is reachable and list loaded models
cortex session close --live-reflect ./session.json   # real reflection via Ollama
cortex memory embed            # enrich memories with Ollama semantic embeddings
```

Configure in `~/.config/cortex/config.toml` (Linux/macOS) or `%APPDATA%\cortex\config.toml` (Windows):

```toml
[llm]
backend = "ollama"

[llm.ollama]
endpoint = "http://localhost:11434"
# Get the digest: ollama show --modelfile llama3.1:8b | grep "^FROM"
model    = "llama3.1:8b@sha256:<64-char-hex-digest>"

[embeddings]
backend = "ollama"

[embeddings.ollama]
model = "nomic-embed-text"
```

The model ref must be digest-pinned — `latest` and plain tags are refused. See [`docs/WIRING_GUIDE.md`](docs/WIRING_GUIDE.md) §16b for the full config reference.
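
To build the pinned ref, you can splice the digest out of the `FROM` line that `ollama show --modelfile` prints. A minimal sketch, assuming the `FROM` line has the `name@sha256:<digest>` shape referenced in the config comment above (the digest below is a fake placeholder, not a real one):

```shell
# Hypothetical FROM line as printed by `ollama show --modelfile llama3.1:8b`
# (a real digest is 64 hex chars; this one is a placeholder).
from_line='FROM llama3.1:8b@sha256:0000000000000000000000000000000000000000000000000000000000000000'

digest="${from_line##*@}"   # keep everything after the last '@'
echo "model = \"llama3.1:8b@${digest}\""
```

The printed line can be pasted directly into the `[llm.ollama]` section of `config.toml`.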

---

## Workspace

13 crates published to crates.io under the `crtx-*` namespace:

| Crate | Role |
|---|---|
| [`crtx-core`](https://crates.io/crates/crtx-core) | IDs, errors, schema version constants |
| [`crtx-ledger`](https://crates.io/crates/crtx-ledger) | Append-only event log and hash chain |
| [`crtx-store`](https://crates.io/crates/crtx-store) | SQLite persistence: migrations, repositories |
| [`crtx-memory`](https://crates.io/crates/crtx-memory) | Memory lifecycle, salience, decay, contradictions |
| [`crtx-reflect`](https://crates.io/crates/crtx-reflect) | Reflection orchestration (no doctrine promotion) |
| [`crtx-llm`](https://crates.io/crates/crtx-llm) | Ollama, Claude, OpenAI-compat adapters behind one trait |
| [`crtx-retrieval`](https://crates.io/crates/crtx-retrieval) | Hybrid retrieval: lexical + FTS5 + semantic |
| [`crtx-context`](https://crates.io/crates/crtx-context) | Context pack assembly with token budgeting |
| [`crtx-runtime`](https://crates.io/crates/crtx-runtime) | Child agent execution glue |
| [`crtx-session`](https://crates.io/crates/crtx-session) | Session-close pipeline |
| [`crtx-mcp`](https://crates.io/crates/crtx-mcp) | MCP stdio JSON-RPC 2.0 server + 24 tool handlers |
| [`crtx-verifier`](https://crates.io/crates/crtx-verifier) | Pure trust-evidence reducer (ADR 0041) |
| [`crtx`](https://crates.io/crates/crtx) | `cortex` CLI binary |

---

## Development

```bash
just check           # full quality gate: fmt + clippy + test + deny (mirrors CI)
just test            # cargo test --workspace
just publish-dry-run # verify all 13 crates package cleanly
```

Requires [`just`](https://github.com/casey/just) (`cargo install just`).  
Without it: `cargo fmt --all -- --check && cargo clippy --workspace --all-targets -- -D warnings && cargo test --workspace && cargo deny check`

CI runs Ubuntu + macOS + Windows in parallel via Azure DevOps Pipelines. Every push runs:
- `cargo check / clippy / fmt / test / deny`
- `taudit authority audit` (gates on High+ findings)
- Four drills: anchor, v1-to-v2, restore, axiom admission

---

## Contributing

See [`CONTRIBUTING.md`](CONTRIBUTING.md) for dev setup, architecture overview, and how to add CLI commands, MCP tools, and migrations.

## Security

See [`SECURITY.md`](SECURITY.md) to report vulnerabilities.  
Trust model and boundaries: [`docs/THREAT_MODEL.md`](docs/THREAT_MODEL.md).

---

## License

Licensed under either of [Apache License, Version 2.0](LICENSE) or
[MIT license](LICENSE) at your option.