Superego
A metacognitive advisor for AI coding assistants. It monitors the conversation, evaluates the assistant's approach, and provides feedback before the assistant finishes.
Supported platforms:
- Claude Code - Full support via plugin
- OpenAI Codex CLI - Alpha support via skill (see codex-skill/)
- OpenCode - Alpha support via TypeScript plugin (see opencode-plugin/)
What It Does
When you use Claude Code (or OpenCode) with superego enabled:
- Session starts - Claude is told superego is active and to take feedback seriously
- Claude works - You interact normally with Claude
- Before large edits - Superego evaluates proposed changes in context (Edit/Write over 20 lines)
- Before Claude finishes - Superego evaluates the full conversation
- If concerns found - Claude is blocked and shown the feedback
- Claude continues - Incorporates feedback, may ask you clarifying questions
- Clean exit - Once addressed (or no concerns), Claude finishes normally
This creates feedback loops where Claude can course-correct both during work and before presenting results.
Quickstart: Claude Code
# 1. Install the plugin
# 2. Initialize in your project (installs binary if needed)
The /superego:init command detects if the binary is missing and offers to install it via Homebrew or Cargo.
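A minimal sketch of that flow, assuming the plugin is distributed through a Claude Code plugin marketplace (the `<owner>/superego` and marketplace identifiers below are placeholders, not confirmed names):

```
# Inside Claude Code:
/plugin marketplace add <owner>/superego      # 1. register the marketplace (placeholder)
/plugin install superego@<marketplace-name>   # 2. install the plugin (placeholder)
/superego:init                                # 3. initialize this project
```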
Level Up: Strategic Alignment with Open Horizons
Superego provides metacognitive feedback—but feedback without strategic context is incomplete. For the full power of aligned AI development, combine superego with Open Horizons MCP:
# Add OH MCP marketplace
# Configure
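How these steps look in practice depends on how OH MCP is distributed; one possible shape, assuming it is exposed as an MCP server registered via Claude Code's `claude mcp add` (the server name and launch command are placeholders, not confirmed):

```
# Placeholder name and command -- see the OH MCP docs for the real ones
claude mcp add open-horizons -- npx open-horizons-mcp
```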
What you get:
- Superego monitors how Claude works (metacognitive feedback)
- OH MCP connects why Claude works (strategic alignment)
- Every decision traces back to your missions and aims
- Claude logs decisions directly to your strategic framework
Learn more: OH MCP Server | Open Horizons
Slash Commands
| Command | Description |
|---|---|
| `/superego:init` | Initialize superego for this project (offers binary install if needed) |
| `/superego:status` | Check if plugin, binary, and project are configured |
| `/superego:prompt` | Manage prompts: list, switch (code/writing), show current |
| `/superego:review` | On-demand review of changes (staged, pr, or file) |
| `/superego:enable` | Enable superego (offers init if not set up) |
| `/superego:disable` | Temporarily disable for current session |
| `/superego:remove` | Remove superego from project |
Updating
# Restart Claude Code to apply
Manual binary installation
If you prefer to install the sg binary manually instead of via /superego:init:
# Homebrew (macOS)
# Cargo (cross-platform)
# From source
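A sketch of what those routes might look like; the formula, crate, and repository names are placeholders rather than confirmed identifiers:

```
brew install superego            # Homebrew formula name assumed
cargo install superego           # crate name assumed
git clone https://github.com/<owner>/superego && cargo install --path superego   # repo URL assumed
```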
Then run sg init in your project to create .superego/.
Quickstart: OpenCode (Alpha)
OpenCode support is in alpha. It uses a TypeScript plugin that runs entirely within OpenCode—no separate binary needed.
# 1. Download plugin to your project
# 2. Start OpenCode and initialize
# Ask: "use superego init"
Or install the plugin globally, as sketched below.
See opencode-plugin/README.md for build-from-source instructions and detailed configuration.
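As a rough sketch, assuming the plugin is a single TypeScript file dropped into OpenCode's plugin directories (the download URL and both paths are assumptions; opencode-plugin/README.md has the authoritative steps):

```
# Project-local install (URL and path assumed)
mkdir -p .opencode/plugin
curl -fsSL -o .opencode/plugin/superego.ts \
  https://raw.githubusercontent.com/<owner>/superego/main/opencode-plugin/superego.ts

# Global install (path assumed)
mkdir -p ~/.config/opencode/plugin
cp .opencode/plugin/superego.ts ~/.config/opencode/plugin/
```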
Quickstart: OpenAI Codex CLI (Alpha)
Codex support uses a skill that the agent can invoke at decision points.
# 1. Install the skill
# 2. In Codex, ask the agent to set up:
# "$superego init"
The $superego init command installs the binary, creates .superego/, and adds AGENTS.md guidance automatically.
After setup, the agent calls $superego at decision points to evaluate the conversation.
See codex-skill/ for details.
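For the manual route, a hedged sketch assuming skills are plain directories copied under Codex's config directory (the `~/.codex/skills/` path and repository URL are assumptions; codex-skill/ is authoritative):

```
# Paths and URL are assumptions, not confirmed
git clone https://github.com/<owner>/superego
mkdir -p ~/.codex/skills
cp -r superego/codex-skill ~/.codex/skills/superego
# Then, inside Codex, ask the agent: "$superego init"
```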
What You'll See
When superego has feedback, Claude will continue working instead of stopping, addressing concerns like:
- Scope drift from the current task
- Missing error handling or edge cases
- Approaches that don't align with project conventions
- Incomplete implementations
If Claude disagrees with non-trivial feedback, it will escalate to you for a decision.
Debugging
Check if hooks are firing
You'll see:
[15:42:01] Hook fired
[15:42:01] Running: sg evaluate-llm
[15:42:03] Evaluation complete
[15:42:03] Blocking with feedback: Consider error handling...
Check superego state
Manual evaluation
Reset everything
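A few hedged examples of what these checks can look like from a shell; only sg init and sg evaluate-llm are commands documented here, and the exact flags or stdin each expects may differ:

```
# Check superego state: project config and prompts live in .superego/
ls -la .superego/
cat .superego/prompt.md

# Manual evaluation (bare invocation shown; the real call may expect hook JSON on stdin)
sg evaluate-llm

# Reset everything: wipe state and re-initialize (discards prompt customizations)
rm -rf .superego/ && sg init
```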
Migrating from legacy hooks
If you previously used sg init before v0.4.0 (which created .claude/hooks/superego/), the legacy hook files need to be migrated to the plugin-managed setup.
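A minimal sketch of the migration, assuming the legacy directory only needs to be removed and the project re-initialized with the plugin installed (verify against the project's own migration notes; you may also need to drop the old hook entries from .claude/settings.json):

```
# Remove the pre-v0.4.0 hook files, then re-initialize
rm -rf .claude/hooks/superego
sg init    # or run /superego:init inside Claude Code
```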
Customization
Prompt Types
Superego ships with multiple prompts for different use cases:
| Prompt | Description |
|---|---|
| `code` | Metacognitive advisor for coding agents (default) |
| `writing` | Co-author reviewer for writing and content creation |
Switch prompts via CLI or slash command:
# Or in Claude Code:
Your customizations are preserved when switching—each prompt type has its own backup (prompt.<type>.md.bak).
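For example (the sg prompt subcommand spelling is an assumption; the slash-command form follows the table above):

```
# CLI (subcommand name assumed -- check sg --help)
sg prompt switch writing

# Or in Claude Code:
/superego:prompt switch writing
```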
Custom Prompt Editing
Edit .superego/prompt.md to customize what superego evaluates:
- Add project-specific guidelines
- Adjust strictness
- Focus on particular concerns
Environment Variables
- `SUPEREGO_DISABLED=1` - Disable superego entirely
- `SUPEREGO_CHANGE_THRESHOLD=N` - Lines required to trigger PreToolUse evaluation (default: 20)
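For example, when launching Claude Code from a shell (hook commands inherit the environment of the claude process):

```
# Only evaluate Edit/Write changes of 50+ lines for this session
SUPEREGO_CHANGE_THRESHOLD=50 claude

# Or switch superego off entirely for this shell
export SUPEREGO_DISABLED=1
claude
```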
How It Works
SessionStart hook
└── Injects contract: "SUPEREGO ACTIVE: critically evaluate feedback..."
PreToolUse hook (before any tool)
├── Checks if periodic eval is due (time-based)
├── For Edit/Write: checks change size (lines added/modified)
├── If >= threshold (default 20): runs sg evaluate-llm with pending change
├── If concerns: returns {"decision":"block","reason":"SUPEREGO FEEDBACK: ..."}
│ └── Claude sees feedback, reconsiders the change
└── If small or clean: allows tool execution
Stop hook (when Claude tries to finish)
├── Runs sg evaluate-llm
├── Reads transcript since last evaluation
├── Sends to LLM with superego prompt
├── If concerns: returns {"decision":"block","reason":"SUPEREGO FEEDBACK: ..."}
│ └── Claude sees feedback, continues working
└── If clean: allows stop
PreCompact hook (before context truncation)
└── Same as Stop - evaluates before transcript is lost
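Purely as an illustration of the hook protocol (the stdin fields sg actually reads are an assumption; the blocking JSON shape is taken from the diagram above):

```
# Feed a fake Stop-hook payload to the evaluator and inspect its verdict
echo '{"hook_event_name":"Stop","transcript_path":"/path/to/transcript.jsonl"}' \
  | sg evaluate-llm
# On concerns it prints something like:
#   {"decision":"block","reason":"SUPEREGO FEEDBACK: ..."}
# and Claude keeps working; with no concerns, the stop is allowed.
```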
Commands
Requirements
- Claude Code CLI
- `jq` (for hook JSON parsing)
- Rust toolchain (to build from source) or Homebrew (for pre-built binary)
License
Source-available. See LICENSE for details.