wm — Working Memory for AI Coding Assistants
wm automatically captures tacit knowledge from your coding sessions and surfaces relevant context for each new task. It's the memory layer that helps AI assistants learn how you work.
The Problem
LLMs have amnesia. Every conversation starts fresh. The patterns you've established, the constraints you've discovered, the preferences you've revealed—all forgotten.
You end up repeating yourself:
- "Remember, we always use X pattern here"
- "Don't forget the constraint about Y"
- "I prefer Z approach for this kind of problem"
The Solution
wm runs silently in the background:
- Extract: After each conversation turn, captures tacit knowledge—the wisdom that emerges from how you work, not just what you say
- Compile: Before each new prompt, filters accumulated knowledge for relevance and injects it as context
The result: AI assistants that remember your patterns across sessions.
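Both steps are wired into Claude Code's hook events by the plugin (see the architecture diagram below). As a rough illustration only, assuming Claude Code's standard hooks schema in settings.json, the wiring amounts to something like:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [ { "type": "command", "command": "wm compile" } ] }
    ],
    "Stop": [
      { "hooks": [ { "type": "command", "command": "sg evaluate" } ] }
    ]
  }
}
```

If the plugin is installed, you don't write this by hand; it is shown only to make the compile/extract split concrete.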
What Gets Captured
Tacit knowledge is the unspoken wisdom in how someone works:
- Rationale behind decisions (WHY this approach, not just WHAT was done)
- Paths rejected and why (the judgment in pruning options)
- Constraints discovered through friction
- Preferences revealed by corrections
- Patterns followed without stating
Not captured:
- What happened ("Fixed X", "Updated Y")
- Explicit requests or questions
- Tool outputs or code snippets
- Content from CLAUDE.md (already explicit)
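To make that concrete, here is an invented excerpt of the kind of item that might end up in state.md (the wording and format are illustrative, not what the extractor literally writes):

```
- Prefer small, reviewable changes; large speculative refactors were rejected twice
- The staging database is shared across teams; never run destructive migrations against it
- Raw SQL was kept over an ORM here so query plans stay auditable
```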
Installation
Prerequisites
- Claude Code CLI
- superego (optional, but recommended)
Option 1: Homebrew (macOS)
Option 2: Cargo (all platforms)
Option 3: From Source
Install the Claude Code Plugin
Initialize in Your Project
Initialization creates a .wm/ directory to store accumulated knowledge.
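The initialization command itself isn't spelled out above; assuming an init subcommand (the name is a guess, the effect is the scaffolding described here):

```sh
cd your-project
wm init    # scaffolds .wm/ in the project root
```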
Usage
Once installed, wm works automatically:
- You write prompts → wm injects relevant context
- You work with Claude → conversation happens normally
- Turn ends → wm extracts any tacit knowledge learned
Manual Commands
# View accumulated knowledge
# View what context would be injected
# List all sessions
# View session-specific working set
# Manually trigger extraction
# Manually compile for a specific intent
# Compress state.md (synthesize to higher abstractions)
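The commands above are listed by purpose only. compile, extract, and compress appear elsewhere in this README; a hedged sketch of invoking them by hand (argument shapes are guesses, and the session and working-set commands are omitted because their names aren't shown):

```sh
wm extract                             # manually trigger extraction
wm compile "refactor the auth layer"   # compile context for a specific intent (argument form assumed)
wm compress                            # synthesize state.md to higher abstractions
```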
Compressing Knowledge
Over time, state.md accumulates knowledge and can grow unwieldy. The compress command distills it down by:
- Merging related items into broader principles
- Abstracting specific instances into general patterns
- Removing obsolete or superseded knowledge
- Preserving critical constraints and preferences
# Compressed: 42 → 18 lines (57% reduction)
# Backup saved to .wm/state.md.backup
Run periodically when state feels bloated, not after every session.
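A plausible workflow, assuming the compress subcommand and the backup path shown above (the diff step is just ordinary shell):

```sh
wm compress                               # rewrites .wm/state.md at a higher abstraction level
diff .wm/state.md.backup .wm/state.md     # review what was merged or dropped
```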
Debugging
# Watch hooks fire in real-time
# Check what's being captured
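Both of the checks above map onto files under .wm/ (see Storage below), so ordinary shell is enough:

```sh
tail -f .wm/hook.log    # watch hooks fire in real time
cat .wm/state.md        # see what has been captured so far
```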
How It Works
Architecture
Claude Code Session
───────────────────

[User Prompt]
         │
         ▼
┌─────────────────┐
│   wm compile    │◄── Reads state.md
│     (hook)      │    Filters for relevance
└────────┬────────┘    Injects as context
         │
         ▼
[Claude Processes with Context]
         │
         ▼
┌─────────────────┐
│   sg evaluate   │◄── Superego evaluates turn
│   (stop hook)   │    Calls wm extract in background
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   wm extract    │◄── Reads transcript
│  (background)   │    Extracts tacit knowledge
└─────────────────┘    Updates state.md
Storage
.wm/
├── state.md          # Accumulated tacit knowledge (the "memory")
├── working_set.md    # Last compiled context
├── hook.log          # Debug log
└── sessions/
    └── <session-id>/ # Per-session state (prevents cross-session bleed)
Integration with Superego
wm is designed to work with superego, a metacognitive advisor for AI assistants. When both are installed:
- superego evaluates Claude's work and provides feedback
- superego's stop hook triggers wm extraction in the background
- wm captures knowledge, superego captures concerns—complementary roles
They compose via shell calls with no shared state (Unix philosophy).
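In practice that composition can be as simple as superego's stop hook launching extraction as a detached command. A hypothetical one-liner (how the transcript is handed off is not specified here):

```sh
# illustrative only: run extraction without blocking the stop hook
wm extract >> .wm/hook.log 2>&1 &
```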
Configuration
Environment Variables
| Variable | Purpose |
|---|---|
| WM_DISABLED=1 | Skip all wm operations |
| CLAUDE_PROJECT_DIR | Project root (auto-set by Claude Code) |
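For example, to run a single session with wm switched off (assuming Claude Code is launched via the claude command):

```sh
WM_DISABLED=1 claude
```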
What to Expect
- First few sessions: Little or no knowledge captured (normal)
- Over time: Patterns accumulate in state.md
- Context injection: Only relevant items surface per-task
- LLM costs: Each extract/compile makes one LLM call (~$0.01-0.05)
Troubleshooting
Hooks not firing
- Check that wm is on your PATH: which wm
- Check that .wm/ exists in the project
- Reinstall the plugin after updates, then restart Claude Code
No knowledge being captured
- Most sessions genuinely have no tacit knowledge worth capturing
- Check .wm/hook.log for extraction activity
- Verify superego is installed (it triggers extraction)
State.md has wrong content
- System reminders (CLAUDE.md content) should be stripped
- If explicit instructions are showing up there, check that the transcript reader is up to date
License
Source-available. See LICENSE.md for details.
Part of Cloud Atlas AI
wm is part of the Cloud Atlas AI ecosystem:
- Open Horizons — Strategic alignment platform
- superego — Metacognitive advisor
- wm — Working memory (this project)
- oh-mcp-server — MCP bridge to Open Horizons