# kap
Run AI agents in secure capsules. Built on devcontainers with network controls and remote access.
- Domain allowlist - only approved domains are reachable from the container
- MCP tool allowlist - only approved tools are callable on remote MCP servers
- CLI proxy - `gh`, `aws`, etc. are proxied with per-command allowlists
- Credential isolation - tokens and API keys live on the sidecar and never enter the app container
- Remote monitoring - monitor and steer agents from your phone over local WiFi
> [!WARNING]
> This is experimental and may have bugs. Use at your own risk.
## Quick start
Requires Docker (or Colima, OrbStack, etc.) and the devcontainer CLI.
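The usual flow, using the commands documented in the table below:

```shell
kap init   # scaffold kap into the project (generates kap.toml)
kap up     # start the app container and the sidecar
kap exec   # open a shell inside the app container
```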
From inside the container, install and run Claude Code, Codex, or any AI agent. All network access is gated by the sidecar.
## SSH agent forwarding
The host SSH agent is forwarded into the container by default, enabling `git clone` over SSH, commit signing, etc. On macOS this uses Docker Desktop's built-in SSH forwarding; on Linux it bind-mounts `$SSH_AUTH_SOCK` directly.
```toml
ssh_agent = true # set to false to disable
```
## Domain allowlist
`kap init` generates a `kap.toml` with a domain allowlist. Defaults cover common package managers, registries, and AI providers.
```toml
[domains]
allow = [
  "github.com",
  "*.github.com",
  "crates.io",
  "*.crates.io",
  "*.ubuntu.com",
]

# deny overrides allow:
deny = ["gist.github.com"]
```
Wildcards (`*.github.com`) match subdomains but not the bare domain. Deny rules always win. Changes to `kap.toml` are hot-reloaded; no container restart needed.
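The matching semantics can be sketched in Python (an illustrative model of the documented rules, not kap's actual implementation; `is_allowed` is a hypothetical name):

```python
from fnmatch import fnmatch

def is_allowed(domain: str, allow: list[str], deny: list[str]) -> bool:
    """Sketch of the documented rules: wildcards match subdomains
    but not the bare domain, and deny always wins over allow."""
    def matches(pattern: str) -> bool:
        if pattern.startswith("*."):
            # "*.github.com" matches "api.github.com" but not "github.com"
            return domain != pattern[2:] and fnmatch(domain, pattern)
        return domain == pattern

    if any(matches(p) for p in deny):
        return False
    return any(matches(p) for p in allow)

allow = ["github.com", "*.github.com"]
deny = ["gist.github.com"]
print(is_allowed("api.github.com", allow, deny))   # True
print(is_allowed("gist.github.com", allow, deny))  # False (deny wins)
```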
## Global config
`~/.kap/kap.toml` applies to all projects. Same format as the project config: domains, CLI tools, and MCP servers are merged automatically. Project settings override global settings for same-name tools/servers.
```toml
# ~/.kap/kap.toml
[domains]
allow = ["*.internal.corp.com", "artifactory.corp.com"]

[cli]

[[cli.tools]]
name = "aws"
env = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
allow = ["s3 *", "sts get-caller-identity"]
```
## MCP proxy
The MCP proxy sits between the agent and remote MCP servers, injecting credentials when forwarding upstream. The app container never sees tokens or API keys.
Register servers on the host:
```shell
# OAuth (opens browser)
kap mcp add github https://...

# API key via headers
kap mcp add context7 https://...
```
Then add each server to `kap.toml` with an allow/deny list:
```toml
[mcp]

# Allow all tools
[[mcp.servers]]
name = "context7"
allow = ["*"]

# Allow only read/search operations
[[mcp.servers]]
name = "github"
allow = ["get_*", "list_*", "search_*"]
```
`deny` overrides `allow`, same as domain and CLI rules. Wildcards work the same as domain patterns (`get_*` matches `get_issue`, `get_user`, etc.).
## CLI proxy

The CLI proxy lets the app container run tools like `gh` or `aws` without direct access to credentials. Two modes:

- Proxy mode (default): the sidecar executes the command and returns stdout/stderr. Credentials never enter the app container. Use `allow`/`deny` to restrict subcommands.
- Direct mode: the sidecar sends credentials to the app container, which runs the command locally. Needed for commands that write files (e.g. `gh run download`). The tool must be installed in the app container.
```toml
[cli]

# Proxy mode: sidecar executes, credentials stay isolated
[[cli.tools]]
name = "aws"
allow = ["s3 ls *", "s3 cp *", "sts get-caller-identity"]
env = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]

# Direct mode: runs locally in the app container
[[cli.tools]]
name = "gh"
mode = "direct"
```
In proxy mode, deny overrides allow. In direct mode, allow/deny are not enforced (the command runs locally). For direct mode, `env` is optional: kap auto-detects env vars for known tools (e.g. `gh` → `GH_TOKEN`).
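Subcommand patterns like `"s3 ls *"` glob against the full argument string. A rough sketch of the documented semantics in Python (illustrative only; `command_allowed` is a hypothetical name, not kap's code):

```python
from fnmatch import fnmatch

def command_allowed(args: list[str], allow: list[str], deny: list[str]) -> bool:
    """Sketch of per-command filtering: glob patterns are matched
    against the space-joined subcommand, and deny wins over allow."""
    cmd = " ".join(args)
    if any(fnmatch(cmd, p) for p in deny):
        return False
    return any(fnmatch(cmd, p) for p in allow)

allow = ["s3 ls *", "s3 cp *", "sts get-caller-identity"]
print(command_allowed(["s3", "ls", "s3://bucket"], allow, []))  # True
print(command_allowed(["iam", "list-users"], allow, []))        # False
```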
## Remote access
Monitor and steer AI agents running in devcontainers from your phone over local WiFi.
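Start the daemon on the host; per the commands table, it prints a QR code:

```shell
kap remote start
```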
Scan the QR code on your phone to open the web UI. It auto-pairs and gives you:
- Status - container state, proxy health, denied request count
- Logs - live streaming proxy events, filterable by denied-only
- Agent - Claude Code session timelines, tool calls, cancel button, follow-up prompts
The daemon runs on the host. All API endpoints require a bearer token issued during QR pairing.
## How it works
`kap up` starts two containers on an internal Docker network: your app container (isolated, no internet) and a kap sidecar (controls all outbound access). The sidecar pulls from `ghcr.io/6/kap:latest` by default, or set `[compose] build` in `kap.toml` to build from source.
```mermaid
graph LR
    subgraph internal ["Internal Docker network"]
        subgraph app ["App container"]
            Agent["AI agent<br/>(Claude Code, Codex, etc.)"]
        end
        subgraph sidecar ["kap sidecar"]
            DP["Domain proxy :3128"]
            DNS["DNS forwarder :53"]
            MCP["MCP proxy :3129"]
            CLI["CLI proxy :3130"]
        end
    end

    Agent -- "HTTP_PROXY" --> DP --> Internet
    Agent -- "DNS" --> DNS --> Upstream_DNS["Upstream DNS"]
    Agent -- "MCP" --> MCP --> MCP_Servers["MCP servers"]
    Agent -- "gh, aws, ..." --> CLI --> APIs

    style internal fill:#f5f5f5,stroke:#bbb,color:#333
    style app fill:#fff0f0,stroke:#e94560,color:#333
    style sidecar fill:#f0f4ff,stroke:#4a7fd4,color:#333
    style Agent fill:#ffdede,stroke:#e94560,color:#333
    style DP fill:#dce6f7,stroke:#4a7fd4,color:#333
    style DNS fill:#dce6f7,stroke:#4a7fd4,color:#333
    style MCP fill:#dce6f7,stroke:#4a7fd4,color:#333
    style CLI fill:#dce6f7,stroke:#4a7fd4,color:#333
```
- The app container has no external network route. All traffic goes through the sidecar.
- DNS queries only resolve allowed domains. Disallowed domains get NXDOMAIN.
- Blocked requests get a 403 (domains) or JSON-RPC error (MCP tools).
- Credentials never enter the app container (in proxy mode). Direct mode CLI tools receive credentials at exec time, but the domain proxy still controls what the container can reach.
## Security model
Network isolation is kernel-enforced, not proxy-based. The Docker internal: true network has no default gateway, so the app container has no IP route to the outside world. Unsetting HTTP_PROXY or making direct TCP connections doesn't bypass it. The only reachable host is the sidecar.
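In Docker Compose terms, the layout looks roughly like this (an illustrative sketch with assumed service and network names, not kap's generated file):

```yaml
networks:
  kap_internal:
    internal: true             # no default gateway; the kernel has no outbound route

services:
  app:
    networks: [kap_internal]   # can only reach the sidecar
  sidecar:
    networks:
      - kap_internal
      - default                # the only container with outbound access
```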
MCP server domains are intentionally not in the domain allowlist. The agent can only reach them through the MCP proxy, which enforces tool filtering.
Known limitations:
- Domain fronting (partial): kap validates that the TLS SNI in the tunnel matches the CONNECT domain, blocking SNI-mismatch attacks. Classic domain fronting (where SNI matches but the encrypted HTTP Host header differs) is not detected because it requires TLS interception, which kap intentionally avoids. Most major CDNs have disabled domain fronting.
- Container escape: a kernel exploit that breaks out of the container bypasses all isolation. Not specific to kap. Running Docker inside a VM (e.g., Docker Desktop, Firecracker) adds defense-in-depth.
- No TLS inspection: kap controls which domains are reachable, not what happens on them. It does not MITM HTTPS traffic. Once a domain is allowed, the agent has full access to that domain's API.
## Commands

| Command | Purpose |
|---|---|
| `kap init` | Scaffold kap into a project |
| `kap up` | Start the devcontainer |
| `kap down` | Stop and remove the devcontainer |
| `kap exec [cmd]` | Run a command in the devcontainer (default: shell) |
| `kap list` | List running devcontainers |
| `kap status` | Check if everything is wired correctly |
| `kap why-denied` | Show denied requests from the proxy log |
| `kap mcp add <name> <url>` | Register an MCP server (OAuth or API key) |
| `kap mcp get <name>` | Show server details and tools list |
| `kap mcp list` | List registered servers |
| `kap mcp remove <name>` | Remove a registered server |
| `kap remote start` | Start the remote access daemon (shows QR code) |
| `kap remote stop` | Stop the remote access daemon |
| `kap remote status` | Show daemon status and paired devices |
| `kap remote revoke <id>` | Revoke a paired device |
## Development
This repo dogfoods kap via its own `.devcontainer/`. Open in VS Code or run:
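Presumably the same flow as any kap project (a sketch based on the commands above):

```shell
kap up     # build and start the dev environment
kap exec   # open a shell inside it
```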