# harness
Run any coding agent from a single CLI. Harness spawns the selected agent as a subprocess, translates its native streaming output into a unified NDJSON event stream, and writes it to stdout.
Write one integration, and it works with any supported agent backend.
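For example, a consumer can spawn harness and read one event per line. The sketch below assumes Python; the `harness claude <prompt>` command shape and the `"type"` field are illustrative assumptions, not harness's documented interface:

```python
import json
import shutil
import subprocess

def run_agent(cmd):
    """Spawn a subprocess and yield each NDJSON line on stdout as a dict.

    Works with any command that emits newline-delimited JSON; the
    harness invocation below is an illustrative assumption.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        if line.strip():
            yield json.loads(line)
    proc.wait()

# Only run when the harness binary is actually installed.
if shutil.which("harness"):
    for event in run_agent(["harness", "claude", "say hello"]):
        print(event["type"])
```

Because every backend is normalized to the same stream, this one loop works unchanged whichever agent is selected.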
## Supported agents

| Agent | Binary | Status |
|---|---|---|
| Claude Code | `claude` | Supported |
| OpenAI Codex | `codex` | Supported |
| OpenCode | `opencode` | Supported |
| Cursor | `agent` | Supported |
## Install
Recommended — curl installer:

```sh
# Installer URL is illustrative — see harness.lol for the exact command
curl -fsSL https://harness.lol/install.sh | sh
```

Via crates.io:

```sh
cargo install harness
```

Build from source:

```sh
cargo build --release
# Binary at target/release/harness
```
## Quick start
```sh
# (Command shapes here are illustrative; run `harness --help` for the real flags.)

# Run Claude Code with a prompt
harness claude "fix the failing test"

# Use a model alias
harness claude --model opus "fix the failing test"

# Dry-run — see the resolved command without executing
harness claude --dry-run "fix the failing test"

# Pipe prompt from stdin
cat prompt.txt | harness claude
```

```sh
# List available agents
harness agents list

# Check if an agent is installed
harness agents check claude
```
## Model registry
Harness maps human-friendly model names to the exact IDs each agent expects:
```sh
# (Command shapes here are illustrative; run `harness --help` for the real flags.)

# List all known models
harness models

# Resolve an alias for a specific agent
harness models resolve opus --agent claude
# → claude-opus-4-6

# Update the registry cache
harness models update
```
Built-in aliases include `opus`. You can add your own in `harness.toml`, and the cached registry at `~/.harness/models.toml` is auto-updated from GitHub.
## Configuration
Place a `harness.toml` in your project root (or any parent directory):
```toml
# Key and table names below are illustrative — see harness.lol for the full schema.
agent = "claude"
model = "sonnet"
permissions = "full-access"
timeout = 300

[agents.claude]
model = "opus"
args = ["--verbose"]

[models.custom]
name = "My custom model"
provider = "anthropic"
id = "my-custom-model-id"
```
## Unified event stream
Every agent's output is translated into a common NDJSON format with 8 event types:
- `SessionStart` — session initialized
- `TextDelta` — streaming text chunk
- `Message` — complete message
- `ToolStart` — tool invocation beginning
- `ToolEnd` — tool invocation complete
- `UsageDelta` — incremental token usage and cost update
- `Result` — run finished
- `Error` — error occurred
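As a sketch of what a consumer does with these events, the streamed text of a run can be reassembled from its `TextDelta` chunks. The field names (`"type"`, `"text"`) and the sample payloads are assumptions for illustration, not the documented schema:

```python
import json

def collect_text(ndjson: str) -> str:
    """Concatenate the TextDelta chunks out of a unified event stream.

    Field names ("type", "text") are illustrative assumptions.
    """
    chunks = []
    for line in ndjson.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event["type"] == "TextDelta":
            chunks.append(event["text"])
    return "".join(chunks)

sample = "\n".join([
    '{"type": "SessionStart", "session_id": "abc123"}',
    '{"type": "TextDelta", "text": "Hello, "}',
    '{"type": "TextDelta", "text": "world"}',
    '{"type": "Result", "status": "success"}',
])
print(collect_text(sample))  # → Hello, world
```

Events the consumer does not care about can simply be skipped, since every line is an independent JSON object.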
## Documentation

Full docs at [harness.lol](https://harness.lol).
## License
MIT