wm — Working Memory for AI Coding Assistants
wm automatically captures tacit knowledge from your coding sessions and surfaces relevant context for each new task. It's the memory layer that helps AI assistants learn how you work.
Supported platforms:
- Claude Code - Full support via plugin
- OpenAI Codex CLI - Alpha support via skill (agent-invoked)
The Problem
LLMs have amnesia. Every conversation starts fresh. The patterns you've established, the constraints you've discovered, the preferences you've revealed—all forgotten.
You end up repeating yourself:
- "Remember, we always use X pattern here"
- "Don't forget the constraint about Y"
- "I prefer Z approach for this kind of problem"
The Solution
wm runs silently in the background:
- Extract: After each conversation turn, captures tacit knowledge—the wisdom that emerges from how you work, not just what you say
- Compile: Before each new prompt, filters accumulated knowledge for relevance and injects it as context
The result: AI assistants that remember your patterns across sessions.
What Gets Captured
Tacit knowledge is the unspoken wisdom in how someone works:
- Rationale behind decisions (WHY this approach, not just WHAT was done)
- Paths rejected and why (the judgment in pruning options)
- Constraints discovered through friction
- Preferences revealed by corrections
- Patterns followed without stating
Not captured:
- What happened ("Fixed X", "Updated Y")
- Explicit requests or questions
- Tool outputs or code snippets
- Content from CLAUDE.md (already explicit)
Quickstart: Claude Code
Prerequisites
- Claude Code CLI
- superego (optional, but recommended for extraction triggers)
Option 1: Homebrew (macOS)
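A sketch, assuming a Homebrew tap under the cloud-atlas-ai org (tap and formula names are assumptions; check the project's releases if this fails):

```bash
# Tap and formula names are assumptions
brew install cloud-atlas-ai/tap/wm
```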
Option 2: Cargo (all platforms)
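A sketch that installs from the repository, since whether the crate is published on crates.io isn't stated here:

```bash
# Installs the binary straight from the repository
cargo install --git https://github.com/cloud-atlas-ai/wm
```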
Option 3: From Source
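A sketch assuming a standard Cargo build, which the Cargo option above implies:

```bash
git clone https://github.com/cloud-atlas-ai/wm
cd wm
cargo install --path .   # builds and installs the wm binary from the checkout
```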
Install the Claude Code Plugin
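A sketch assuming the plugin is distributed through Claude Code's plugin marketplace; the marketplace and plugin names, and the exact slash-command syntax, are assumptions:

```
# Inside Claude Code (names and exact syntax are assumptions)
/plugin marketplace add cloud-atlas-ai/wm
/plugin install wm
```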
Initialize in Your Project
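From your project root (`wm init` is listed under Manual Commands below):

```bash
wm init
```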
This creates a .wm/ directory to store accumulated knowledge.
Quickstart: OpenAI Codex CLI (Alpha)
Codex support uses agent skills that can be invoked at decision points. Most features work, but session auto-discovery is limited due to Codex's different session storage format.
What works:
- ✅ Manual knowledge capture/review (state.md, working_set.md)
- ✅ Dive prep with context gathering
- ✅ Compress and pause operations
- ⚠️ Distill requires manual transcript paths (no auto-discovery yet)
1. Install the binary (choose one of the options above).
2. Install the skills, or download the agents individually from `https://raw.githubusercontent.com/cloud-atlas-ai/wm/main/plugin/agents` (see the sketch below).
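A minimal download sketch, assuming each agent skill ships as a Markdown file named after the skill (the base URL is the one given above; the file names are assumptions):

```bash
BASE_URL="https://raw.githubusercontent.com/cloud-atlas-ai/wm/main/plugin/agents"

# File names are assumed to match the skill names in the table below
for skill in dive-prep review compress pause distill; do
  curl -fsSL "$BASE_URL/$skill.md" -o "$skill.md"
done
```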
Available Agent Skills:
| Skill | Support | When to Use |
|---|---|---|
| `$wm:dive-prep` | ✅ Full | Prepare focused work session with intent, context, and workflow |
| `$wm:review` | ✅ Full | Review accumulated knowledge and current context |
| `$wm:compress` | ✅ Full | Synthesize state.md to higher-level abstractions |
| `$wm:pause` | ✅ Full | Pause/resume operations (extract, compile, or both) |
| `$wm:distill` | ⚠️ Manual | Batch extract (requires manual transcript paths) |
Typical Workflows:
- Session start: prepare dive context
- Mid-session: review what wm knows
- Maintenance: compress accumulated knowledge
- Sensitive work: pause extraction
- Manual extraction from a Codex session (auto-discovery isn't supported yet): find your sessions under `~/.codex/sessions/`, then extract from a specific transcript (commands sketched below)
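A hedged sketch of these workflows, assuming the skills are invoked by the names in the table above and that `wm distill` accepts an explicit transcript path (the `--transcript` flag is an assumption):

```
# Session start - prepare dive context
$wm:dive-prep

# Mid-session - review what wm knows
$wm:review

# Maintenance - compress accumulated knowledge
$wm:compress

# Sensitive work - pause extraction
$wm:pause

# Manual extraction from a Codex session
ls ~/.codex/sessions/                            # find your sessions
wm distill --transcript <path-to-session-file>   # flag name assumed
```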
Why limited session discovery?
Codex stores sessions in ~/.codex/sessions/YYYY/MM/DD/ with different naming and structure than Claude Code's ~/.claude/projects/<project-id>/. Auto-discovery support for Codex sessions is tracked in #11.
Manual Commands:
All CLI commands work normally: `wm init`, `wm show state`, `wm show working`, etc.
See plugin/agents/ for detailed documentation on each skill.
Usage
Once installed, wm works automatically:
- You write prompts → wm injects relevant context
- You work with Claude → conversation happens normally
- Turn ends → wm extracts any tacit knowledge learned
Manual Commands
- View accumulated knowledge
- View what context would be injected
- List all sessions
- View a session-specific working set
- Manually trigger extraction
- Manually compile for a specific intent
- Compress state.md (synthesize to higher abstractions)
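A sketch of the corresponding commands; `wm show state`, `wm show working`, `wm extract`, `wm compile`, and `wm compress` appear elsewhere in this README, while the session-listing subcommand and the `--session`/`--intent` flags are assumptions:

```bash
wm show state                    # view accumulated knowledge
wm show working                  # view what context would be injected
wm show sessions                 # list all sessions (subcommand assumed)
wm show working --session <id>   # session-specific working set (flag assumed)
wm extract                       # manually trigger extraction
wm compile --intent "<intent>"   # compile for a specific intent (flag assumed)
wm compress                      # compress state.md
```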
Compressing Knowledge
Over time, state.md accumulates knowledge and can grow unwieldy. The compress command distills it down by:
- Merging related items into broader principles
- Abstracting specific instances into general patterns
- Removing obsolete or superseded knowledge
- Preserving critical constraints and preferences
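Run it from the project root; the subcommand name comes from this section, and any flags it accepts aren't documented here:

```bash
wm compress
```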
Example output:

```
Compressed: 42 → 18 lines (57% reduction)
Backup saved to .wm/state.md.backup
```
Run periodically when state feels bloated, not after every session.
Dive Sessions
A dive is a focused work session with explicit grounding. The metaphor: you don't just splash around—you dive into work with a clear purpose, knowing what you're after and what constraints apply.
Without a dive, AI sessions often drift:
- You start coding without clarity on the goal
- Constraints surface mid-work ("oh wait, we can't do that because...")
- Related knowledge sits unused because nothing surfaced it
- The session ends without capturing what was learned
With a dive, you start grounded:
- Intent is explicit (fix, plan, explore, review, ship)
- Context is curated (relevant knowledge, constraints, mission)
- Workflow is suggested (what steps this intent typically follows)
- Focus is documented (what specifically you're working on)
This isn't overhead—it's the 30 seconds of setup that saves 30 minutes of drift.
The /dive-prep Skill
Invoke /dive-prep in Claude Code to prepare a focused work session:
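For example (the inline intent argument is an assumption; if you omit it, the skill prompts for intent, as described below):

```
/dive-prep fix
```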
What it does:
- Detects context — Reads CLAUDE.md, git state, and existing `.wm/` knowledge
- Checks for OH — If Open Horizons MCP is connected, offers to link to an endeavor for strategic context (missions, guardrails, learnings)
- Asks for intent — If not provided, prompts for what you're trying to accomplish
- Builds workflow — Suggests steps based on intent type
- Writes manifest — Creates `.wm/dive_context.md` with curated grounding
With Open Horizons (recommended):
OH provides the strategic layer: why you're doing this (mission), what not to do (guardrails), and what you've learned (metis).
Without OH:

`/dive-prep` simply prompts for your focus (e.g. "What are you exploring?"). Still valuable—you get explicit intent, workflow guidance, and documented focus.
The wm dive Commands
Manage dive context directly:
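Only `wm dive load` is referenced in this README; its pack argument and any sibling subcommands are assumptions:

```bash
wm dive load <pack-name>   # load a dive pack from Open Horizons (argument assumed)
```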
Dive packs are curated context bundles stored in Open Horizons. They're useful for recurring work patterns—load a pack instead of rebuilding context each time.
Configuration:
- Set your OH API key (required for `wm dive load`)
- Or configure it in `~/.config/openhorizons/config.json`
Batch Distillation
The distill command extracts knowledge from all your Claude Code sessions at once, instead of per-turn extraction:
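A minimal run over the current project; the subcommand name comes from this section, and flags for narrowing scope aren't documented here:

```bash
wm distill
```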
How it works:
- Discovers sessions — Finds all Claude Code transcripts for this project
- Extracts incrementally — Caches results, only processes new/changed sessions
- Accumulates knowledge — Writes raw extractions to `.wm/distill/raw_extractions.md`
When to use:
- Initial setup: extract knowledge from existing sessions
- Periodic catchup: if per-turn extraction was paused
- Audit: see what knowledge exists across all sessions
Output:
```
.wm/distill/
├── raw_extractions.md   # Accumulated knowledge from all sessions
├── cache.json           # Extraction cache (enables incremental runs)
└── errors.log           # Any extraction failures
```
Pause and Resume
Temporarily disable wm operations without uninstalling:
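The `WM_DISABLED` switch below is documented under Configuration; the finer-grained pause/resume subcommands are assumptions (in Codex, use the `$wm:pause` skill instead):

```bash
# Documented switch: skip all wm operations
export WM_DISABLED=1

# Finer-grained pause/resume (subcommand and argument names are assumptions)
wm pause extract
wm resume extract
```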
When to use:
- Sensitive work: Pause extraction when working on confidential code
- Debugging: Isolate issues by disabling one operation
- Performance: Skip context injection on simple tasks
Debugging
- Watch hooks fire in real-time
- Check what's being captured
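Both checks use pieces documented elsewhere in this README: `.wm/hook.log` under Storage and `wm show state` under Manual Commands:

```bash
tail -f .wm/hook.log   # watch hooks fire in real-time
wm show state          # check what's being captured
```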
How It Works
Architecture
```
Claude Code Session

  [User Prompt]
        │
        ▼
  ┌─────────────────┐
  │   wm compile    │◄── Reads state.md
  │     (hook)      │    Filters for relevance
  └────────┬────────┘    Injects as context
           │
           ▼
  [Claude Processes with Context]
           │
           ▼
  ┌─────────────────┐
  │   sg evaluate   │◄── Superego evaluates turn
  │   (stop hook)   │    Calls wm extract in background
  └────────┬────────┘
           │
           ▼
  ┌─────────────────┐
  │   wm extract    │◄── Reads transcript
  │  (background)   │    Extracts tacit knowledge
  └─────────────────┘    Updates state.md
```
Storage
```
.wm/
├── state.md         # Accumulated tacit knowledge (the "memory")
├── working_set.md   # Last compiled context
├── hook.log         # Debug log
└── sessions/
    └── <session-id>/   # Per-session state (prevents cross-session bleed)
```
Integration with Superego
wm is designed to work with superego, a metacognitive advisor for AI assistants. When both are installed:
- superego evaluates Claude's work and provides feedback
- superego's stop hook triggers wm extraction in the background
- wm captures knowledge, superego captures concerns—complementary roles
They compose via shell calls with no shared state (Unix philosophy).
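An illustrative sketch of that composition (the real hook wiring ships with the plugin): extraction is a fire-and-forget shell call, so neither tool blocks the other:

```bash
# Fire-and-forget: kick off extraction without blocking the end of the turn
wm extract &
```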
Configuration
Environment Variables
| Variable | Purpose |
|---|---|
| `WM_DISABLED=1` | Skip all wm operations |
| `CLAUDE_PROJECT_DIR` | Project root (auto-set by Claude Code) |
What to Expect
- First few sessions: Little or no knowledge captured (normal)
- Over time: Patterns accumulate in state.md
- Context injection: Only relevant items surface per-task
- LLM costs: Each extract/compile makes one LLM call (~$0.01-0.05)
Troubleshooting
Hooks not firing
- Check that `wm` is in your PATH: `which wm`
- Check that `.wm/` exists in the project
- Reinstall the plugin after updates, then restart Claude Code
No knowledge being captured
- Most sessions genuinely have no tacit knowledge worth capturing
- Check `.wm/hook.log` for extraction activity
- Verify superego is installed (it triggers extraction)
State.md has wrong content
- System reminders (CLAUDE.md content) should be stripped
- If you see explicit instructions there, check that the transcript reader is up to date
License
Source-available. See LICENSE.md for details.
Part of Cloud Atlas AI
wm is part of the Cloud Atlas AI ecosystem:
- Open Horizons — Strategic alignment platform
- superego — Metacognitive advisor
- wm — Working memory (this project)
- oh-mcp-server — MCP bridge to Open Horizons